Test Report: QEMU_macOS 20068

3e5ae302b6a4bf4af6cc92954bf8488d685fb633:2024-12-09:37406
Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 21.52
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.09
27 TestAddons/Setup 10.19
28 TestCertOptions 12.4
29 TestCertExpiration 197.87
30 TestDockerFlags 10.19
31 TestForceSystemdFlag 10.02
32 TestForceSystemdEnv 10.05
38 TestErrorSpam/setup 9.88
47 TestFunctional/serial/StartWithProxy 10.06
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.72
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.2
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.33
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 70.17
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.06
141 TestMultiControlPlane/serial/StartCluster 10.1
142 TestMultiControlPlane/serial/DeployApp 104.38
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 42.33
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.13
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.12
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 4.09
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.88
165 TestJSONOutput/start/Command 9.91
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.4
197 TestMountStart/serial/StartWithMountFirst 10.1
200 TestMultiNode/serial/FreshStart2Nodes 9.99
201 TestMultiNode/serial/DeployApp2Nodes 88.98
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 41.22
209 TestMultiNode/serial/RestartKeepsNodes 9.28
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.56
212 TestMultiNode/serial/RestartMultiNode 5.28
213 TestMultiNode/serial/ValidateNameConflict 20.16
217 TestPreload 10.12
219 TestScheduledStopUnix 10.13
220 TestSkaffold 12.28
223 TestRunningBinaryUpgrade 627.73
225 TestKubernetesUpgrade 20.97
228 TestStoppedBinaryUpgrade/Upgrade 593.24
238 TestPause/serial/Start 10.07
241 TestNoKubernetes/serial/StartWithK8s 9.86
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.96
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.44
255 TestNoKubernetes/serial/StartWithStopK8s 5.34
256 TestNoKubernetes/serial/Start 5.33
260 TestNoKubernetes/serial/StartNoArgs 6.87
263 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
264 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
268 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
269 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
270 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
271 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
272 TestStartStop/group/old-k8s-version/serial/Pause 0.11
274 TestStartStop/group/no-preload/serial/FirstStart 9.92
275 TestStartStop/group/no-preload/serial/DeployApp 0.1
276 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
279 TestStartStop/group/no-preload/serial/SecondStart 5.27
280 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
281 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
282 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
283 TestStartStop/group/no-preload/serial/Pause 0.11
285 TestStartStop/group/embed-certs/serial/FirstStart 10.02
286 TestStartStop/group/embed-certs/serial/DeployApp 0.1
287 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
290 TestStartStop/group/embed-certs/serial/SecondStart 5.26
291 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
294 TestStartStop/group/embed-certs/serial/Pause 0.11
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
301 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
302 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
303 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
307 TestStartStop/group/newest-cni/serial/FirstStart 10.04
312 TestStartStop/group/newest-cni/serial/SecondStart 5.27
315 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/newest-cni/serial/Pause 0.11
317 TestNetworkPlugins/group/auto/Start 9.93
318 TestNetworkPlugins/group/kindnet/Start 9.85
319 TestNetworkPlugins/group/calico/Start 10.08
320 TestNetworkPlugins/group/custom-flannel/Start 10.09
321 TestNetworkPlugins/group/false/Start 9.95
322 TestNetworkPlugins/group/enable-default-cni/Start 9.97
323 TestNetworkPlugins/group/flannel/Start 9.88
324 TestNetworkPlugins/group/bridge/Start 10.09
325 TestNetworkPlugins/group/kubenet/Start 11.3
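
Details for each failure follow. To re-run a single failing test locally against a rebuilt out/minikube-darwin-arm64, minikube's integration suite can be driven through its make target; the invocation below is a sketch based on minikube's contributor documentation, not taken from this report, and flags may differ by version:

	# Hypothetical local re-run of one failing test (from the minikube repo root):
	make integration -e TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestOffline -test.v -test.timeout=30m"
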
TestDownloadOnly/v1.20.0/json-events (21.52s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-118000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-118000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (21.518671s)

-- stdout --
	{"specversion":"1.0","id":"b32f6371-568b-40e7-8f44-ed6ced5f03d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-118000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d500a7a-07f5-4e1b-b8ba-bd3ef0f8eac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20068"}}
	{"specversion":"1.0","id":"d8e797a9-2222-400b-9980-00ef1c01ae5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig"}}
	{"specversion":"1.0","id":"4df0b32a-6d29-4d97-aaa0-2048a5da7aa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1ee11bb0-f6ae-45f6-8230-c6bcceb04c26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a0b97343-1fd0-4d17-923a-9ab2118abf1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube"}}
	{"specversion":"1.0","id":"6c93edb1-e4ac-41a5-b95d-7587f7b9d4c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"9b1e3772-92e3-4e71-bd7c-c2da34584239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4cacf36-9628-41c2-8238-e64cf48a66b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0ea762f8-f589-4a3f-b7de-f5b7f6125e9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"158dcc4e-ded7-40a3-aaf5-ae3975e1bd93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-118000\" primary control-plane node in \"download-only-118000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"76f948b5-9df9-43cc-bde2-72d6ad0ad832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f22db44-60e3-402a-85fa-d72040dfe931","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320] Decompressors:map[bz2:0x14000514e40 gz:0x14000514e48 tar:0x14000514de0 tar.bz2:0x14000514df0 tar.gz:0x14000514e00 tar.xz:0x14000514e10 tar.zst:0x14000514e20 tbz2:0x14000514df0 tgz:0x14
000514e00 txz:0x14000514e10 tzst:0x14000514e20 xz:0x14000514e50 zip:0x14000514e60 zst:0x14000514e58] Getters:map[file:0x14000888680 http:0x14000e1e230 https:0x14000e1e280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"09ff6ddb-5bd5-44db-8efb-ef1272e8f1fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:21:34.939211    7821 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:21:34.939415    7821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:21:34.939418    7821 out.go:358] Setting ErrFile to fd 2...
	I1209 03:21:34.939420    7821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:21:34.939542    7821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	W1209 03:21:34.939632    7821 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20068-6536/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20068-6536/.minikube/config/config.json: no such file or directory
	I1209 03:21:34.941051    7821 out.go:352] Setting JSON to true
	I1209 03:21:34.959055    7821 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4865,"bootTime":1733738429,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:21:34.959137    7821 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:21:34.964984    7821 out.go:97] [download-only-118000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:21:34.965106    7821 notify.go:220] Checking for updates...
	W1209 03:21:34.965148    7821 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 03:21:34.967939    7821 out.go:169] MINIKUBE_LOCATION=20068
	I1209 03:21:34.970992    7821 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:21:34.975918    7821 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:21:34.980021    7821 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:21:34.982944    7821 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	W1209 03:21:34.988970    7821 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 03:21:34.989181    7821 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:21:34.991865    7821 out.go:97] Using the qemu2 driver based on user configuration
	I1209 03:21:34.991885    7821 start.go:297] selected driver: qemu2
	I1209 03:21:34.991907    7821 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:21:34.992003    7821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:21:34.994929    7821 out.go:169] Automatically selected the socket_vmnet network
	I1209 03:21:35.001384    7821 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1209 03:21:35.001467    7821 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 03:21:35.001538    7821 cni.go:84] Creating CNI manager for ""
	I1209 03:21:35.001568    7821 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 03:21:35.001630    7821 start.go:340] cluster config:
	{Name:download-only-118000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:21:35.006369    7821 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:21:35.010966    7821 out.go:97] Downloading VM boot image ...
	I1209 03:21:35.010980    7821 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1209 03:21:46.043159    7821 out.go:97] Starting "download-only-118000" primary control-plane node in "download-only-118000" cluster
	I1209 03:21:46.043179    7821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:21:46.101072    7821 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:21:46.101112    7821 cache.go:56] Caching tarball of preloaded images
	I1209 03:21:46.101323    7821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:21:46.105516    7821 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 03:21:46.105524    7821 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:46.185965    7821 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:21:55.113310    7821 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:55.113595    7821 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:55.807719    7821 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 03:21:55.807917    7821 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/download-only-118000/config.json ...
	I1209 03:21:55.807933    7821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/download-only-118000/config.json: {Name:mkb9b1b4d0abc72f7eea8177d8ece2e4cb09aaf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:21:55.808196    7821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:21:55.808486    7821 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1209 03:21:56.372849    7821 out.go:193] 
	W1209 03:21:56.380832    7821 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320] Decompressors:map[bz2:0x14000514e40 gz:0x14000514e48 tar:0x14000514de0 tar.bz2:0x14000514df0 tar.gz:0x14000514e00 tar.xz:0x14000514e10 tar.zst:0x14000514e20 tbz2:0x14000514df0 tgz:0x14000514e00 txz:0x14000514e10 tzst:0x14000514e20 xz:0x14000514e50 zip:0x14000514e60 zst:0x14000514e58] Getters:map[file:0x14000888680 http:0x14000e1e230 https:0x14000e1e280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1209 03:21:56.380858    7821 out_reason.go:110] 
	W1209 03:21:56.388868    7821 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:21:56.392769    7821 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-118000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (21.52s)
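
The error event above pins the cause: the kubectl download for darwin/arm64 at v1.20.0 404s on its checksum file, i.e. no such binary was published upstream for that release. A quick check from any machine reproduces it (a sketch; assumes curl is available):

	# Follow redirects and print only the final HTTP status for the checksum URL from the log:
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Prints 404 — v1.20.0 predates published darwin/arm64 kubectl binaries.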

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-476000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-476000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.920570916s)

-- stdout --
	* [offline-docker-476000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-476000" primary control-plane node in "offline-docker-476000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-476000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:32:24.476994    9541 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:32:24.477159    9541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:32:24.477163    9541 out.go:358] Setting ErrFile to fd 2...
	I1209 03:32:24.477165    9541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:32:24.477328    9541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:32:24.479298    9541 out.go:352] Setting JSON to false
	I1209 03:32:24.501938    9541 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5515,"bootTime":1733738429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:32:24.502028    9541 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:32:24.506583    9541 out.go:177] * [offline-docker-476000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:32:24.519585    9541 notify.go:220] Checking for updates...
	I1209 03:32:24.523421    9541 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:32:24.530519    9541 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:32:24.536493    9541 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:32:24.539491    9541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:32:24.542434    9541 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:32:24.548456    9541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:32:24.551838    9541 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:32:24.551902    9541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:32:24.562419    9541 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:32:24.574473    9541 start.go:297] selected driver: qemu2
	I1209 03:32:24.574481    9541 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:32:24.574488    9541 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:32:24.577029    9541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:32:24.581423    9541 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:32:24.585549    9541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:32:24.585574    9541 cni.go:84] Creating CNI manager for ""
	I1209 03:32:24.585604    9541 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:32:24.585609    9541 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:32:24.585649    9541 start.go:340] cluster config:
	{Name:offline-docker-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:32:24.590767    9541 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:32:24.599464    9541 out.go:177] * Starting "offline-docker-476000" primary control-plane node in "offline-docker-476000" cluster
	I1209 03:32:24.611363    9541 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:32:24.611394    9541 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:32:24.611423    9541 cache.go:56] Caching tarball of preloaded images
	I1209 03:32:24.611503    9541 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:32:24.611509    9541 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:32:24.611574    9541 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/offline-docker-476000/config.json ...
	I1209 03:32:24.611586    9541 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/offline-docker-476000/config.json: {Name:mk3d94f58353792aab486b11bf32dc04fbdd1f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:32:24.611975    9541 start.go:360] acquireMachinesLock for offline-docker-476000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:32:24.612032    9541 start.go:364] duration metric: took 45µs to acquireMachinesLock for "offline-docker-476000"
	I1209 03:32:24.612045    9541 start.go:93] Provisioning new machine with config: &{Name:offline-docker-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:32:24.612090    9541 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:32:24.621473    9541 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:32:24.640029    9541 start.go:159] libmachine.API.Create for "offline-docker-476000" (driver="qemu2")
	I1209 03:32:24.640063    9541 client.go:168] LocalClient.Create starting
	I1209 03:32:24.640143    9541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:32:24.640185    9541 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:24.640201    9541 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:24.640250    9541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:32:24.640283    9541 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:24.640290    9541 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:24.640724    9541 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:32:24.801807    9541 main.go:141] libmachine: Creating SSH key...
	I1209 03:32:24.887641    9541 main.go:141] libmachine: Creating Disk image...
	I1209 03:32:24.887646    9541 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:32:24.887864    9541 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2
	I1209 03:32:24.897734    9541 main.go:141] libmachine: STDOUT: 
	I1209 03:32:24.897760    9541 main.go:141] libmachine: STDERR: 
	I1209 03:32:24.897822    9541 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2 +20000M
	I1209 03:32:24.906257    9541 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:32:24.906276    9541 main.go:141] libmachine: STDERR: 
	I1209 03:32:24.906290    9541 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2
	I1209 03:32:24.906295    9541 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:32:24.906311    9541 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:32:24.906352    9541 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:32:09:34:63:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2
	I1209 03:32:24.908121    9541 main.go:141] libmachine: STDOUT: 
	I1209 03:32:24.908135    9541 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:32:24.908156    9541 client.go:171] duration metric: took 268.09125ms to LocalClient.Create
	I1209 03:32:26.910318    9541 start.go:128] duration metric: took 2.298247s to createHost
	I1209 03:32:26.910373    9541 start.go:83] releasing machines lock for "offline-docker-476000", held for 2.298374542s
	W1209 03:32:26.910423    9541 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:26.939417    9541 out.go:177] * Deleting "offline-docker-476000" in qemu2 ...
	W1209 03:32:26.963704    9541 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:26.963726    9541 start.go:729] Will try again in 5 seconds ...
	I1209 03:32:31.965815    9541 start.go:360] acquireMachinesLock for offline-docker-476000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:32:31.966437    9541 start.go:364] duration metric: took 540.458µs to acquireMachinesLock for "offline-docker-476000"
	I1209 03:32:31.966578    9541 start.go:93] Provisioning new machine with config: &{Name:offline-docker-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:32:31.966846    9541 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:32:31.986623    9541 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:32:32.032247    9541 start.go:159] libmachine.API.Create for "offline-docker-476000" (driver="qemu2")
	I1209 03:32:32.032302    9541 client.go:168] LocalClient.Create starting
	I1209 03:32:32.032430    9541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:32:32.032507    9541 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:32.032523    9541 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:32.032589    9541 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:32:32.032647    9541 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:32.032668    9541 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:32.033389    9541 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:32:32.205522    9541 main.go:141] libmachine: Creating SSH key...
	I1209 03:32:32.286261    9541 main.go:141] libmachine: Creating Disk image...
	I1209 03:32:32.286269    9541 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:32:32.286486    9541 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2
	I1209 03:32:32.296352    9541 main.go:141] libmachine: STDOUT: 
	I1209 03:32:32.296371    9541 main.go:141] libmachine: STDERR: 
	I1209 03:32:32.296436    9541 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2 +20000M
	I1209 03:32:32.305001    9541 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:32:32.305018    9541 main.go:141] libmachine: STDERR: 
	I1209 03:32:32.305035    9541 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2
	I1209 03:32:32.305040    9541 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:32:32.305049    9541 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:32:32.305083    9541 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:03:5a:0f:02:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/offline-docker-476000/disk.qcow2
	I1209 03:32:32.306839    9541 main.go:141] libmachine: STDOUT: 
	I1209 03:32:32.306854    9541 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:32:32.306867    9541 client.go:171] duration metric: took 274.564167ms to LocalClient.Create
	I1209 03:32:34.308998    9541 start.go:128] duration metric: took 2.3421675s to createHost
	I1209 03:32:34.309095    9541 start.go:83] releasing machines lock for "offline-docker-476000", held for 2.3426505s
	W1209 03:32:34.309494    9541 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:34.325237    9541 out.go:201] 
	W1209 03:32:34.330238    9541 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:32:34.330265    9541 out.go:270] * 
	* 
	W1209 03:32:34.333117    9541 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:32:34.347166    9541 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-476000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-09 03:32:34.363862 -0800 PST m=+659.494224626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-476000 -n offline-docker-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-476000 -n offline-docker-476000: exit status 7 (73.340875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-476000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-476000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-476000
--- FAIL: TestOffline (10.09s)
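
This failure, like most of the cluster-start failures in this run, bottoms out in the same error: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon was not running on the build agent. A minimal health check on the agent, assuming the install paths shown in the logs above (the launch command follows socket_vmnet's documented usage and is an assumption for this host):

	# Is the daemon socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet          # should show a unix socket while the daemon runs
	pgrep -fl socket_vmnet               # should list the socket_vmnet process
	# If not running, start it manually (requires root; the gateway address is an example value):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &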

TestAddons/Setup (10.19s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-850000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-850000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.183641833s)

-- stdout --
	* [addons-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-850000" primary control-plane node in "addons-850000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:22:08.124230    7899 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:22:08.124390    7899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:22:08.124393    7899 out.go:358] Setting ErrFile to fd 2...
	I1209 03:22:08.124396    7899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:22:08.124530    7899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:22:08.125749    7899 out.go:352] Setting JSON to false
	I1209 03:22:08.143596    7899 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4899,"bootTime":1733738429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:22:08.143671    7899 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:22:08.148846    7899 out.go:177] * [addons-850000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:22:08.155720    7899 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:22:08.155818    7899 notify.go:220] Checking for updates...
	I1209 03:22:08.162835    7899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:22:08.165841    7899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:22:08.168881    7899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:22:08.171853    7899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:22:08.173128    7899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:22:08.175989    7899 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:22:08.179927    7899 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:22:08.184803    7899 start.go:297] selected driver: qemu2
	I1209 03:22:08.184811    7899 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:22:08.184818    7899 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:22:08.187303    7899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:22:08.189891    7899 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:22:08.193707    7899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:22:08.193723    7899 cni.go:84] Creating CNI manager for ""
	I1209 03:22:08.193744    7899 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:22:08.193749    7899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:22:08.193788    7899 start.go:340] cluster config:
	{Name:addons-850000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:22:08.198523    7899 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:22:08.206848    7899 out.go:177] * Starting "addons-850000" primary control-plane node in "addons-850000" cluster
	I1209 03:22:08.210866    7899 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:22:08.210884    7899 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:22:08.210891    7899 cache.go:56] Caching tarball of preloaded images
	I1209 03:22:08.210988    7899 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:22:08.210994    7899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:22:08.211201    7899 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/addons-850000/config.json ...
	I1209 03:22:08.211213    7899 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/addons-850000/config.json: {Name:mkf4244fd774bb6eb387dcbfd21cf3cf5e1327ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:22:08.211607    7899 start.go:360] acquireMachinesLock for addons-850000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:22:08.211695    7899 start.go:364] duration metric: took 82.875µs to acquireMachinesLock for "addons-850000"
	I1209 03:22:08.211707    7899 start.go:93] Provisioning new machine with config: &{Name:addons-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:22:08.211737    7899 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:22:08.220813    7899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 03:22:08.238286    7899 start.go:159] libmachine.API.Create for "addons-850000" (driver="qemu2")
	I1209 03:22:08.238332    7899 client.go:168] LocalClient.Create starting
	I1209 03:22:08.238557    7899 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:22:08.339765    7899 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:22:08.484309    7899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:22:08.685339    7899 main.go:141] libmachine: Creating SSH key...
	I1209 03:22:08.765725    7899 main.go:141] libmachine: Creating Disk image...
	I1209 03:22:08.765731    7899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:22:08.765965    7899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2
	I1209 03:22:08.775974    7899 main.go:141] libmachine: STDOUT: 
	I1209 03:22:08.775990    7899 main.go:141] libmachine: STDERR: 
	I1209 03:22:08.776041    7899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2 +20000M
	I1209 03:22:08.784426    7899 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:22:08.784439    7899 main.go:141] libmachine: STDERR: 
	I1209 03:22:08.784448    7899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2
	I1209 03:22:08.784452    7899 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:22:08.784491    7899 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:22:08.784526    7899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:1d:f6:a7:12:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2
	I1209 03:22:08.786243    7899 main.go:141] libmachine: STDOUT: 
	I1209 03:22:08.786259    7899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:22:08.786287    7899 client.go:171] duration metric: took 547.955292ms to LocalClient.Create
	I1209 03:22:10.788402    7899 start.go:128] duration metric: took 2.576704083s to createHost
	I1209 03:22:10.788533    7899 start.go:83] releasing machines lock for "addons-850000", held for 2.576870417s
	W1209 03:22:10.788613    7899 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:22:10.803858    7899 out.go:177] * Deleting "addons-850000" in qemu2 ...
	W1209 03:22:10.836721    7899 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:22:10.836765    7899 start.go:729] Will try again in 5 seconds ...
	I1209 03:22:15.838815    7899 start.go:360] acquireMachinesLock for addons-850000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:22:15.839413    7899 start.go:364] duration metric: took 509.333µs to acquireMachinesLock for "addons-850000"
	I1209 03:22:15.839538    7899 start.go:93] Provisioning new machine with config: &{Name:addons-850000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-850000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:22:15.839865    7899 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:22:15.859771    7899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 03:22:15.909524    7899 start.go:159] libmachine.API.Create for "addons-850000" (driver="qemu2")
	I1209 03:22:15.909571    7899 client.go:168] LocalClient.Create starting
	I1209 03:22:15.909711    7899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:22:15.909780    7899 main.go:141] libmachine: Decoding PEM data...
	I1209 03:22:15.909797    7899 main.go:141] libmachine: Parsing certificate...
	I1209 03:22:15.909862    7899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:22:15.909918    7899 main.go:141] libmachine: Decoding PEM data...
	I1209 03:22:15.909929    7899 main.go:141] libmachine: Parsing certificate...
	I1209 03:22:15.910518    7899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:22:16.082907    7899 main.go:141] libmachine: Creating SSH key...
	I1209 03:22:16.206537    7899 main.go:141] libmachine: Creating Disk image...
	I1209 03:22:16.206545    7899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:22:16.206786    7899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2
	I1209 03:22:16.216665    7899 main.go:141] libmachine: STDOUT: 
	I1209 03:22:16.216684    7899 main.go:141] libmachine: STDERR: 
	I1209 03:22:16.216753    7899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2 +20000M
	I1209 03:22:16.225282    7899 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:22:16.225298    7899 main.go:141] libmachine: STDERR: 
	I1209 03:22:16.225311    7899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2
	I1209 03:22:16.225316    7899 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:22:16.225324    7899 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:22:16.225358    7899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:79:6e:ac:93:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/addons-850000/disk.qcow2
	I1209 03:22:16.227158    7899 main.go:141] libmachine: STDOUT: 
	I1209 03:22:16.227172    7899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:22:16.227186    7899 client.go:171] duration metric: took 317.618667ms to LocalClient.Create
	I1209 03:22:18.229278    7899 start.go:128] duration metric: took 2.389449375s to createHost
	I1209 03:22:18.229397    7899 start.go:83] releasing machines lock for "addons-850000", held for 2.389962875s
	W1209 03:22:18.229782    7899 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:22:18.239281    7899 out.go:201] 
	W1209 03:22:18.247433    7899 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:22:18.247465    7899 out.go:270] * 
	* 
	W1209 03:22:18.250006    7899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:22:18.260182    7899 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-850000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.19s)
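
Every start attempt in this run dies at the same step: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet before it can hand fd 3 to qemu-system-aarch64 (the -netdev socket,id=net0,fd=3 in the command above), and that connect is refused, so no VM ever boots. A minimal diagnostic sketch in Go, under the assumption that a plain unix-domain dial against the daemon's socket reproduces the failure (this is a standalone probe, not minikube code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // SocketVMnetPath from the cluster config dumped above.
        const sock = "/var/run/socket_vmnet"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // On this agent: "connection refused" -> daemon not running.
            fmt.Printf("socket_vmnet unreachable: %v\n", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }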

TestCertOptions (12.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-312000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-312000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (12.121567833s)

-- stdout --
	* [cert-options-312000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-312000" primary control-plane node in "cert-options-312000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-312000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-312000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-312000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-312000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-312000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.480625ms)

-- stdout --
	* The control-plane node cert-options-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-312000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-312000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-312000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-312000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-312000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.601ms)

-- stdout --
	* The control-plane node cert-options-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-312000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-312000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-312000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-09 03:44:04.602554 -0800 PST m=+1349.745849543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-312000 -n cert-options-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-312000 -n cert-options-312000: exit status 7 (33.534ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-312000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-312000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-312000
--- FAIL: TestCertOptions (12.40s)
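
For context on what the SAN assertions at cert_options_test.go:69 verify: on a healthy cluster the test reads /var/lib/minikube/certs/apiserver.crt over ssh and checks that every requested --apiserver-ips / --apiserver-names value appears in the certificate's subject alternative names. Here the host never started, so the ssh read failed and all four checks report the value missing. A self-contained Go sketch of the SAN check itself, assuming a locally saved copy of the certificate (the file path is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // certHasSAN reports whether the PEM-encoded certificate carries the
    // given value as a DNS name or IP address in its SANs.
    func certHasSAN(pemBytes []byte, want string) bool {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false
        }
        for _, name := range cert.DNSNames {
            if name == want {
                return true
            }
        }
        for _, ip := range cert.IPAddresses {
            if ip.String() == want {
                return true
            }
        }
        return false
    }

    func main() {
        pemBytes, err := os.ReadFile("apiserver.crt") // illustrative local copy
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
            fmt.Printf("SAN %-14s present=%v\n", want, certHasSAN(pemBytes, want))
        }
    }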

TestCertExpiration (197.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-299000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-299000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.487997875s)

-- stdout --
	* [cert-expiration-299000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-299000" primary control-plane node in "cert-expiration-299000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-299000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-299000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-299000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-299000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-299000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.23508925s)

-- stdout --
	* [cert-expiration-299000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-299000" primary control-plane node in "cert-expiration-299000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-299000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-299000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-299000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-299000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-299000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-299000" primary control-plane node in "cert-expiration-299000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-299000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-299000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-299000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-09 03:47:07.225198 -0800 PST m=+1532.371915126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-299000 -n cert-expiration-299000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-299000 -n cert-expiration-299000: exit status 7 (36.557042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-299000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-299000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-299000
--- FAIL: TestCertExpiration (197.87s)
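
Two notes on this failure. First, the wall time: the two start attempts together took under 18 seconds, so the roughly 3-minute --cert-expiration=3m window the test waits out between them accounts for most of the 197.87s. Second, the final assertion (cert_options_test.go:136) expects the second start's output to warn about expired certificates, which can never happen here because the VM never boots. A small Go sketch of the underlying expiry check, assuming a locally saved PEM certificate (the path is illustrative, not the test's actual mechanism):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("apiserver.crt") // illustrative local copy
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        // With --cert-expiration=3m, NotAfter lands ~3 minutes after creation,
        // so by the second start attempt this would already be in the past.
        if remaining := time.Until(cert.NotAfter); remaining <= 0 {
            fmt.Printf("certificate expired %s ago (NotAfter=%s)\n", -remaining, cert.NotAfter)
        } else {
            fmt.Printf("certificate valid for another %s\n", remaining)
        }
    }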

TestDockerFlags (10.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-461000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-461000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.903300375s)

-- stdout --
	* [docker-flags-461000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-461000" primary control-plane node in "docker-flags-461000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-461000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:43:42.158235   10132 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:43:42.158401   10132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:43:42.158404   10132 out.go:358] Setting ErrFile to fd 2...
	I1209 03:43:42.158407   10132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:43:42.158535   10132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:43:42.159784   10132 out.go:352] Setting JSON to false
	I1209 03:43:42.177874   10132 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6193,"bootTime":1733738429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:43:42.177949   10132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:43:42.184319   10132 out.go:177] * [docker-flags-461000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:43:42.192457   10132 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:43:42.192491   10132 notify.go:220] Checking for updates...
	I1209 03:43:42.201414   10132 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:43:42.204544   10132 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:43:42.208398   10132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:43:42.211412   10132 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:43:42.214425   10132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:43:42.217815   10132 config.go:182] Loaded profile config "force-systemd-flag-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:43:42.217883   10132 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:43:42.217925   10132 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:43:42.222410   10132 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:43:42.229423   10132 start.go:297] selected driver: qemu2
	I1209 03:43:42.229429   10132 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:43:42.229435   10132 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:43:42.231945   10132 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:43:42.236395   10132 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:43:42.239542   10132 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1209 03:43:42.239565   10132 cni.go:84] Creating CNI manager for ""
	I1209 03:43:42.239594   10132 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:43:42.239601   10132 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:43:42.239660   10132 start.go:340] cluster config:
	{Name:docker-flags-461000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:43:42.244153   10132 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:43:42.252438   10132 out.go:177] * Starting "docker-flags-461000" primary control-plane node in "docker-flags-461000" cluster
	I1209 03:43:42.256330   10132 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:43:42.256343   10132 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:43:42.256350   10132 cache.go:56] Caching tarball of preloaded images
	I1209 03:43:42.256420   10132 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:43:42.256425   10132 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:43:42.256483   10132 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/docker-flags-461000/config.json ...
	I1209 03:43:42.256494   10132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/docker-flags-461000/config.json: {Name:mkda8aef423e18fe534a233e9861cb7e9b1014ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:43:42.256882   10132 start.go:360] acquireMachinesLock for docker-flags-461000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:43:42.256935   10132 start.go:364] duration metric: took 38.583µs to acquireMachinesLock for "docker-flags-461000"
	I1209 03:43:42.256945   10132 start.go:93] Provisioning new machine with config: &{Name:docker-flags-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:43:42.256973   10132 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:43:42.261436   10132 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:43:42.277010   10132 start.go:159] libmachine.API.Create for "docker-flags-461000" (driver="qemu2")
	I1209 03:43:42.277032   10132 client.go:168] LocalClient.Create starting
	I1209 03:43:42.277115   10132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:43:42.277154   10132 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:42.277168   10132 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:42.277200   10132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:43:42.277229   10132 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:42.277236   10132 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:42.277685   10132 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:43:42.438827   10132 main.go:141] libmachine: Creating SSH key...
	I1209 03:43:42.497175   10132 main.go:141] libmachine: Creating Disk image...
	I1209 03:43:42.497180   10132 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:43:42.497408   10132 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2
	I1209 03:43:42.507114   10132 main.go:141] libmachine: STDOUT: 
	I1209 03:43:42.507139   10132 main.go:141] libmachine: STDERR: 
	I1209 03:43:42.507198   10132 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2 +20000M
	I1209 03:43:42.515800   10132 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:43:42.515819   10132 main.go:141] libmachine: STDERR: 
	I1209 03:43:42.515833   10132 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2
	I1209 03:43:42.515837   10132 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:43:42.515848   10132 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:43:42.515887   10132 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:da:67:93:4b:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2
	I1209 03:43:42.517718   10132 main.go:141] libmachine: STDOUT: 
	I1209 03:43:42.517731   10132 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:43:42.517749   10132 client.go:171] duration metric: took 240.71725ms to LocalClient.Create
	I1209 03:43:44.519872   10132 start.go:128] duration metric: took 2.262919791s to createHost
	I1209 03:43:44.519941   10132 start.go:83] releasing machines lock for "docker-flags-461000", held for 2.263039208s
	W1209 03:43:44.519987   10132 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:44.531605   10132 out.go:177] * Deleting "docker-flags-461000" in qemu2 ...
	W1209 03:43:44.566663   10132 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:44.566688   10132 start.go:729] Will try again in 5 seconds ...
	I1209 03:43:49.568633   10132 start.go:360] acquireMachinesLock for docker-flags-461000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:43:49.568713   10132 start.go:364] duration metric: took 67.292µs to acquireMachinesLock for "docker-flags-461000"
	I1209 03:43:49.568728   10132 start.go:93] Provisioning new machine with config: &{Name:docker-flags-461000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-461000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:43:49.568779   10132 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:43:49.580171   10132 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:43:49.595752   10132 start.go:159] libmachine.API.Create for "docker-flags-461000" (driver="qemu2")
	I1209 03:43:49.595785   10132 client.go:168] LocalClient.Create starting
	I1209 03:43:49.595854   10132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:43:49.595899   10132 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:49.595908   10132 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:49.595941   10132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:43:49.595969   10132 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:49.595977   10132 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:49.596861   10132 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:43:49.799861   10132 main.go:141] libmachine: Creating SSH key...
	I1209 03:43:49.958783   10132 main.go:141] libmachine: Creating Disk image...
	I1209 03:43:49.958791   10132 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:43:49.959073   10132 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2
	I1209 03:43:49.969497   10132 main.go:141] libmachine: STDOUT: 
	I1209 03:43:49.969516   10132 main.go:141] libmachine: STDERR: 
	I1209 03:43:49.969577   10132 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2 +20000M
	I1209 03:43:49.978078   10132 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:43:49.978096   10132 main.go:141] libmachine: STDERR: 
	I1209 03:43:49.978115   10132 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2
	I1209 03:43:49.978129   10132 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:43:49.978142   10132 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:43:49.978170   10132 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:1d:cc:4a:c3:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/docker-flags-461000/disk.qcow2
	I1209 03:43:49.980061   10132 main.go:141] libmachine: STDOUT: 
	I1209 03:43:49.980074   10132 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:43:49.980086   10132 client.go:171] duration metric: took 384.304458ms to LocalClient.Create
	I1209 03:43:51.982216   10132 start.go:128] duration metric: took 2.413455958s to createHost
	I1209 03:43:51.982410   10132 start.go:83] releasing machines lock for "docker-flags-461000", held for 2.413639583s
	W1209 03:43:51.982745   10132 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-461000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-461000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:52.001400   10132 out.go:201] 
	W1209 03:43:52.009449   10132 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:43:52.009490   10132 out.go:270] * 
	* 
	W1209 03:43:52.011734   10132 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:43:52.021301   10132 out.go:201] 

** /stderr **
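
Note: every failure in this test bottoms out in the same STDERR line — socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. The sketch below is a minimal Go probe of that socket (our own illustration, not part of the minikube tree; only the socket path comes from the log) that can confirm the socket_vmnet daemon is listening before a rerun:

	// probe.go: dial the unix socket that socket_vmnet_client needs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the STDERR above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this run the daemon was not listening, hence "Connection refused".
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
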
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-461000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-461000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-461000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.452083ms)

-- stdout --
	* The control-plane node docker-flags-461000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-461000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-461000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-461000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-461000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-461000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-461000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-461000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-461000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (69.722667ms)

-- stdout --
	* The control-plane node docker-flags-461000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-461000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-461000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-461000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-461000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-461000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-09 03:43:52.179491 -0800 PST m=+1337.322553376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-461000 -n docker-flags-461000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-461000 -n docker-flags-461000: exit status 7 (40.686375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-461000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-461000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-461000
--- FAIL: TestDockerFlags (10.19s)

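Note: the qemu2 driver does not start qemu-system-aarch64 directly. As the command lines above show, it runs QEMU under socket_vmnet_client, which must first connect to the daemon's socket and pass the connected descriptor to QEMU as fd 3 (matching -netdev socket,id=net0,fd=3). A refused connection therefore aborts VM creation before QEMU ever runs, leaving every profile at state=Stopped. A rough Go sketch of that handoff, using our own simplified names rather than minikube's actual code:

	// handoff.go: dial socket_vmnet, then run QEMU with the socket as fd 3.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("dial socket_vmnet: %v", err) // the step that failed in every run above
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the fd for inheritance
		if err != nil {
			log.Fatalf("fd from conn: %v", err)
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3", // fd 3 == ExtraFiles[0]
			"-device", "virtio-net-pci,netdev=net0")
		cmd.ExtraFiles = []*os.File{f} // entry 0 becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("qemu: %v", err)
		}
	}
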
TestForceSystemdFlag (10.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-497000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-497000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.818047s)

-- stdout --
	* [force-systemd-flag-497000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-497000" primary control-plane node in "force-systemd-flag-497000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-497000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1209 03:43:39.510093   10114 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:43:39.510255   10114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:43:39.510258   10114 out.go:358] Setting ErrFile to fd 2...
	I1209 03:43:39.510261   10114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:43:39.510390   10114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:43:39.511466   10114 out.go:352] Setting JSON to false
	I1209 03:43:39.529461   10114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6190,"bootTime":1733738429,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:43:39.529539   10114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:43:39.536739   10114 out.go:177] * [force-systemd-flag-497000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:43:39.545570   10114 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:43:39.545630   10114 notify.go:220] Checking for updates...
	I1209 03:43:39.554530   10114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:43:39.557589   10114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:43:39.558941   10114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:43:39.562528   10114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:43:39.565551   10114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:43:39.568884   10114 config.go:182] Loaded profile config "NoKubernetes-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1209 03:43:39.568965   10114 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:43:39.569019   10114 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:43:39.572484   10114 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:43:39.579539   10114 start.go:297] selected driver: qemu2
	I1209 03:43:39.579545   10114 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:43:39.579553   10114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:43:39.582071   10114 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:43:39.586516   10114 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:43:39.589611   10114 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 03:43:39.589631   10114 cni.go:84] Creating CNI manager for ""
	I1209 03:43:39.589656   10114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:43:39.589660   10114 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:43:39.589696   10114 start.go:340] cluster config:
	{Name:force-systemd-flag-497000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:43:39.594445   10114 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:43:39.602632   10114 out.go:177] * Starting "force-systemd-flag-497000" primary control-plane node in "force-systemd-flag-497000" cluster
	I1209 03:43:39.606562   10114 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:43:39.606578   10114 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:43:39.606595   10114 cache.go:56] Caching tarball of preloaded images
	I1209 03:43:39.606675   10114 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:43:39.606681   10114 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:43:39.606740   10114 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/force-systemd-flag-497000/config.json ...
	I1209 03:43:39.606752   10114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/force-systemd-flag-497000/config.json: {Name:mk17068b1aaf76530274f964535e8ea9e4f798fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:43:39.607206   10114 start.go:360] acquireMachinesLock for force-systemd-flag-497000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:43:39.607259   10114 start.go:364] duration metric: took 41.875µs to acquireMachinesLock for "force-systemd-flag-497000"
	I1209 03:43:39.607270   10114 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:43:39.607301   10114 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:43:39.614604   10114 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:43:39.631494   10114 start.go:159] libmachine.API.Create for "force-systemd-flag-497000" (driver="qemu2")
	I1209 03:43:39.631524   10114 client.go:168] LocalClient.Create starting
	I1209 03:43:39.631599   10114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:43:39.631639   10114 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:39.631648   10114 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:39.631686   10114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:43:39.631718   10114 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:39.631728   10114 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:39.632175   10114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:43:39.792533   10114 main.go:141] libmachine: Creating SSH key...
	I1209 03:43:39.821947   10114 main.go:141] libmachine: Creating Disk image...
	I1209 03:43:39.821953   10114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:43:39.822195   10114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2
	I1209 03:43:39.832120   10114 main.go:141] libmachine: STDOUT: 
	I1209 03:43:39.832141   10114 main.go:141] libmachine: STDERR: 
	I1209 03:43:39.832202   10114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2 +20000M
	I1209 03:43:39.840661   10114 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:43:39.840676   10114 main.go:141] libmachine: STDERR: 
	I1209 03:43:39.840690   10114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2
	I1209 03:43:39.840695   10114 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:43:39.840707   10114 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:43:39.840751   10114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:62:7b:6a:64:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2
	I1209 03:43:39.842574   10114 main.go:141] libmachine: STDOUT: 
	I1209 03:43:39.842588   10114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:43:39.842608   10114 client.go:171] duration metric: took 211.081083ms to LocalClient.Create
	I1209 03:43:41.844741   10114 start.go:128] duration metric: took 2.237461792s to createHost
	I1209 03:43:41.844794   10114 start.go:83] releasing machines lock for "force-systemd-flag-497000", held for 2.237568708s
	W1209 03:43:41.844938   10114 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:41.868401   10114 out.go:177] * Deleting "force-systemd-flag-497000" in qemu2 ...
	W1209 03:43:41.926894   10114 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:41.926922   10114 start.go:729] Will try again in 5 seconds ...
	I1209 03:43:46.929010   10114 start.go:360] acquireMachinesLock for force-systemd-flag-497000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:43:46.929630   10114 start.go:364] duration metric: took 508.083µs to acquireMachinesLock for "force-systemd-flag-497000"
	I1209 03:43:46.929796   10114 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:43:46.930090   10114 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:43:46.939712   10114 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:43:46.989192   10114 start.go:159] libmachine.API.Create for "force-systemd-flag-497000" (driver="qemu2")
	I1209 03:43:46.989252   10114 client.go:168] LocalClient.Create starting
	I1209 03:43:46.989392   10114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:43:46.989475   10114 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:46.989495   10114 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:46.989561   10114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:43:46.989627   10114 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:46.989638   10114 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:46.995213   10114 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:43:47.185922   10114 main.go:141] libmachine: Creating SSH key...
	I1209 03:43:47.225111   10114 main.go:141] libmachine: Creating Disk image...
	I1209 03:43:47.225117   10114 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:43:47.225357   10114 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2
	I1209 03:43:47.235572   10114 main.go:141] libmachine: STDOUT: 
	I1209 03:43:47.235589   10114 main.go:141] libmachine: STDERR: 
	I1209 03:43:47.235666   10114 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2 +20000M
	I1209 03:43:47.244112   10114 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:43:47.244135   10114 main.go:141] libmachine: STDERR: 
	I1209 03:43:47.244149   10114 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2
	I1209 03:43:47.244154   10114 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:43:47.244162   10114 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:43:47.244198   10114 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:24:1e:ff:65:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-flag-497000/disk.qcow2
	I1209 03:43:47.246038   10114 main.go:141] libmachine: STDOUT: 
	I1209 03:43:47.246051   10114 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:43:47.246063   10114 client.go:171] duration metric: took 256.811417ms to LocalClient.Create
	I1209 03:43:49.248196   10114 start.go:128] duration metric: took 2.318114667s to createHost
	I1209 03:43:49.248251   10114 start.go:83] releasing machines lock for "force-systemd-flag-497000", held for 2.3186195s
	W1209 03:43:49.248658   10114 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:49.258222   10114 out.go:201] 
	W1209 03:43:49.267549   10114 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:43:49.267585   10114 out.go:270] * 
	* 
	W1209 03:43:49.270221   10114 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:43:49.281368   10114 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-497000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-497000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-497000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (83.36275ms)

-- stdout --
	* The control-plane node force-systemd-flag-497000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-497000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-497000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-09 03:43:49.383246 -0800 PST m=+1334.526256376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-497000 -n force-systemd-flag-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-497000 -n force-systemd-flag-497000: exit status 7 (36.555042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-497000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-497000
--- FAIL: TestForceSystemdFlag (10.02s)

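Note: the control flow is identical in every failed test here: createHost fails, the half-created profile is deleted, minikube waits five seconds, retries exactly once, then exits with GUEST_PROVISION (exit status 80). A compact Go sketch of that retry shape as it appears in these logs (function names are ours, not minikube's):

	// retry.go: one retry after 5s, then a hard exit, as in the logs above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func createHost() error {
		// Stand-in for the driver call; in these runs it always failed with
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		return errors.New("connect /var/run/socket_vmnet: connection refused")
	}

	func main() {
		err := createHost()
		if err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			err = createHost()
		}
		if err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // matches the observed exit status
		}
	}
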
TestForceSystemdEnv (10.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.842856958s)

-- stdout --
	* [force-systemd-env-177000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-177000" primary control-plane node in "force-systemd-env-177000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-177000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1209 03:43:29.464777   10061 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:43:29.464926   10061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:43:29.464930   10061 out.go:358] Setting ErrFile to fd 2...
	I1209 03:43:29.464938   10061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:43:29.465079   10061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:43:29.466179   10061 out.go:352] Setting JSON to false
	I1209 03:43:29.483776   10061 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6180,"bootTime":1733738429,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:43:29.483853   10061 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:43:29.489973   10061 out.go:177] * [force-systemd-env-177000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:43:29.496799   10061 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:43:29.496836   10061 notify.go:220] Checking for updates...
	I1209 03:43:29.503748   10061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:43:29.507782   10061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:43:29.510805   10061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:43:29.513778   10061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:43:29.516788   10061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1209 03:43:29.520088   10061 config.go:182] Loaded profile config "NoKubernetes-797000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1209 03:43:29.520165   10061 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:43:29.520208   10061 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:43:29.523748   10061 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:43:29.530832   10061 start.go:297] selected driver: qemu2
	I1209 03:43:29.530839   10061 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:43:29.530848   10061 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:43:29.533341   10061 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:43:29.536746   10061 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:43:29.539848   10061 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 03:43:29.539867   10061 cni.go:84] Creating CNI manager for ""
	I1209 03:43:29.539891   10061 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:43:29.539902   10061 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:43:29.539943   10061 start.go:340] cluster config:
	{Name:force-systemd-env-177000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:43:29.544551   10061 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:43:29.552815   10061 out.go:177] * Starting "force-systemd-env-177000" primary control-plane node in "force-systemd-env-177000" cluster
	I1209 03:43:29.556771   10061 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:43:29.556789   10061 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:43:29.556798   10061 cache.go:56] Caching tarball of preloaded images
	I1209 03:43:29.556891   10061 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:43:29.556896   10061 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:43:29.556951   10061 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/force-systemd-env-177000/config.json ...
	I1209 03:43:29.556962   10061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/force-systemd-env-177000/config.json: {Name:mk0f1d016580b94af1843e9578d15da79b33d78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:43:29.557389   10061 start.go:360] acquireMachinesLock for force-systemd-env-177000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:43:29.557439   10061 start.go:364] duration metric: took 41.542µs to acquireMachinesLock for "force-systemd-env-177000"
	I1209 03:43:29.557450   10061 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-177000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:43:29.557481   10061 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:43:29.565772   10061 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:43:29.583145   10061 start.go:159] libmachine.API.Create for "force-systemd-env-177000" (driver="qemu2")
	I1209 03:43:29.583180   10061 client.go:168] LocalClient.Create starting
	I1209 03:43:29.583260   10061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:43:29.583299   10061 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:29.583309   10061 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:29.583354   10061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:43:29.583386   10061 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:29.583397   10061 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:29.583802   10061 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:43:29.744517   10061 main.go:141] libmachine: Creating SSH key...
	I1209 03:43:29.812944   10061 main.go:141] libmachine: Creating Disk image...
	I1209 03:43:29.812953   10061 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:43:29.813184   10061 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I1209 03:43:29.823269   10061 main.go:141] libmachine: STDOUT: 
	I1209 03:43:29.823295   10061 main.go:141] libmachine: STDERR: 
	I1209 03:43:29.823350   10061 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2 +20000M
	I1209 03:43:29.831789   10061 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:43:29.831803   10061 main.go:141] libmachine: STDERR: 
	I1209 03:43:29.831824   10061 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I1209 03:43:29.831829   10061 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:43:29.831841   10061 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:43:29.831870   10061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ea:73:b2:91:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I1209 03:43:29.833683   10061 main.go:141] libmachine: STDOUT: 
	I1209 03:43:29.833695   10061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:43:29.833720   10061 client.go:171] duration metric: took 250.539166ms to LocalClient.Create
	I1209 03:43:31.835861   10061 start.go:128] duration metric: took 2.278402042s to createHost
	I1209 03:43:31.835923   10061 start.go:83] releasing machines lock for "force-systemd-env-177000", held for 2.278517042s
	W1209 03:43:31.836061   10061 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:31.852254   10061 out.go:177] * Deleting "force-systemd-env-177000" in qemu2 ...
	W1209 03:43:31.881313   10061 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:31.881332   10061 start.go:729] Will try again in 5 seconds ...
	I1209 03:43:36.883444   10061 start.go:360] acquireMachinesLock for force-systemd-env-177000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:43:36.883865   10061 start.go:364] duration metric: took 347.917µs to acquireMachinesLock for "force-systemd-env-177000"
	I1209 03:43:36.883991   10061 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-177000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:43:36.884220   10061 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:43:36.889692   10061 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 03:43:36.941128   10061 start.go:159] libmachine.API.Create for "force-systemd-env-177000" (driver="qemu2")
	I1209 03:43:36.941182   10061 client.go:168] LocalClient.Create starting
	I1209 03:43:36.941325   10061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:43:36.941418   10061 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:36.941433   10061 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:36.941491   10061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:43:36.941550   10061 main.go:141] libmachine: Decoding PEM data...
	I1209 03:43:36.941562   10061 main.go:141] libmachine: Parsing certificate...
	I1209 03:43:36.942198   10061 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:43:37.116464   10061 main.go:141] libmachine: Creating SSH key...
	I1209 03:43:37.204433   10061 main.go:141] libmachine: Creating Disk image...
	I1209 03:43:37.204443   10061 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:43:37.204672   10061 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I1209 03:43:37.214731   10061 main.go:141] libmachine: STDOUT: 
	I1209 03:43:37.214756   10061 main.go:141] libmachine: STDERR: 
	I1209 03:43:37.214820   10061 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2 +20000M
	I1209 03:43:37.223372   10061 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:43:37.223388   10061 main.go:141] libmachine: STDERR: 
	I1209 03:43:37.223406   10061 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I1209 03:43:37.223412   10061 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:43:37.223428   10061 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:43:37.223454   10061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:44:6d:bc:7d:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/force-systemd-env-177000/disk.qcow2
	I1209 03:43:37.225208   10061 main.go:141] libmachine: STDOUT: 
	I1209 03:43:37.225221   10061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:43:37.225235   10061 client.go:171] duration metric: took 284.052958ms to LocalClient.Create
	I1209 03:43:39.227371   10061 start.go:128] duration metric: took 2.343165792s to createHost
	I1209 03:43:39.227489   10061 start.go:83] releasing machines lock for "force-systemd-env-177000", held for 2.343581167s
	W1209 03:43:39.227938   10061 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:43:39.241647   10061 out.go:201] 
	W1209 03:43:39.245701   10061 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:43:39.245729   10061 out.go:270] * 
	* 
	W1209 03:43:39.248174   10061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:43:39.261551   10061 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-177000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.486458ms)

-- stdout --
	* The control-plane node force-systemd-env-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-177000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-177000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-09 03:43:39.359671 -0800 PST m=+1324.502493126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-177000 -n force-systemd-env-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-177000 -n force-systemd-env-177000: exit status 7 (36.375542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-177000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-177000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-177000
--- FAIL: TestForceSystemdEnv (10.05s)
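
Every failure in this block traces to a single stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver never launches qemu-system-aarch64 directly; it wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the Unix socket served by the socket_vmnet daemon. Note that the disk-preparation steps above (qemu-img convert, qemu-img resize) finish with empty STDERR, so provisioning dies at the networking hand-off, before the guest ever boots. A minimal manual triage sketch for the build host, assuming the launchd-managed socket_vmnet install that these paths suggest:

    ls -l /var/run/socket_vmnet                    # the listening socket should exist
    sudo launchctl list | grep -i socket_vmnet     # assumes a launchd install; the daemon should show as loaded
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
                                                   # dials the socket, then execs `true`; reproduces "Connection refused" when the daemon is down

If the last command fails the same way, restarting the socket_vmnet daemon (not minikube) is the actual fix; the suggested "minikube delete -p force-systemd-env-177000" only clears the half-created profile.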

TestErrorSpam/setup (9.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-251000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-251000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 --driver=qemu2 : exit status 80 (9.873385167s)

-- stdout --
	* [nospam-251000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-251000" primary control-plane node in "nospam-251000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-251000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-251000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-251000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-251000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-251000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20068
- KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-251000" primary control-plane node in "nospam-251000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-251000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-251000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.88s)
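
TestErrorSpam/setup asserts two things, and this run fails both for the same root cause: a plain start must produce no unexpected stderr (the StartHost retry warning, the GUEST_PROVISION exit, and the issue-reporting banner all count against it), and stdout must contain the three kubeadm init sub-steps quoted above. Because the VM never boots, kubeadm never runs, so those strings cannot appear. A rough manual split of the two assertions, assuming output is captured to files (a hypothetical step, not something the suite does; --log_dir omitted here):

    out/minikube-darwin-arm64 start -p nospam-251000 -n=1 --memory=2250 --wait=false --driver=qemu2 >stdout.log 2>stderr.log
    grep -F 'Generating certificates and keys' stdout.log || echo 'kubeadm bootstrap never started'
    grep -cv '^$' stderr.log      # every non-empty line here is "spam" by this test's definition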

TestFunctional/serial/StartWithProxy (10.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.986213208s)

-- stdout --
	* [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-174000" primary control-plane node in "functional-174000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-174000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:60344 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:60344 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:60344 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-174000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20068
- KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-174000" primary control-plane node in "functional-174000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-174000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:60344 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:60344 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:60344 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (75.191916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.06s)
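
This is the one test in the suite that exercises proxy handling: it launches start with HTTP_PROXY pointing at a local proxy and expects "Found network options:" in stdout and "You appear to be using a proxy" in stderr. Since start aborts at VM creation, neither message is reached; the only proxy-related output is the repeated "Local proxy ignored" warning, which minikube emits for localhost proxies it deliberately does not forward into the Docker environment. A rough sketch of the invocation, with the env value taken from the stderr above:

    HTTP_PROXY=localhost:60344 out/minikube-darwin-arm64 start -p functional-174000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2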

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1209 03:22:49.291007    7820 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --alsologtostderr -v=8: exit status 80 (5.195952959s)

-- stdout --
	* [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-174000" primary control-plane node in "functional-174000" cluster
	* Restarting existing qemu2 VM for "functional-174000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-174000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:22:49.323767    8043 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:22:49.323927    8043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:22:49.323930    8043 out.go:358] Setting ErrFile to fd 2...
	I1209 03:22:49.323932    8043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:22:49.324046    8043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:22:49.325077    8043 out.go:352] Setting JSON to false
	I1209 03:22:49.342828    8043 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4940,"bootTime":1733738429,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:22:49.342907    8043 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:22:49.348574    8043 out.go:177] * [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:22:49.356544    8043 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:22:49.356572    8043 notify.go:220] Checking for updates...
	I1209 03:22:49.364446    8043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:22:49.368487    8043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:22:49.372460    8043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:22:49.375458    8043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:22:49.378495    8043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:22:49.381644    8043 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:22:49.381690    8043 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:22:49.386448    8043 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:22:49.393394    8043 start.go:297] selected driver: qemu2
	I1209 03:22:49.393399    8043 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:22:49.393447    8043 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:22:49.396110    8043 cni.go:84] Creating CNI manager for ""
	I1209 03:22:49.396155    8043 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:22:49.396208    8043 start.go:340] cluster config:
	{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:22:49.400730    8043 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:22:49.407382    8043 out.go:177] * Starting "functional-174000" primary control-plane node in "functional-174000" cluster
	I1209 03:22:49.411450    8043 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:22:49.411467    8043 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:22:49.411480    8043 cache.go:56] Caching tarball of preloaded images
	I1209 03:22:49.411572    8043 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:22:49.411579    8043 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:22:49.411639    8043 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/functional-174000/config.json ...
	I1209 03:22:49.412145    8043 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:22:49.412179    8043 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "functional-174000"
	I1209 03:22:49.412187    8043 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:22:49.412193    8043 fix.go:54] fixHost starting: 
	I1209 03:22:49.412311    8043 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
	W1209 03:22:49.412319    8043 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:22:49.419455    8043 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
	I1209 03:22:49.423490    8043 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:22:49.423542    8043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
	I1209 03:22:49.425848    8043 main.go:141] libmachine: STDOUT: 
	I1209 03:22:49.425879    8043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:22:49.425911    8043 fix.go:56] duration metric: took 13.717959ms for fixHost
	I1209 03:22:49.425916    8043 start.go:83] releasing machines lock for "functional-174000", held for 13.732958ms
	W1209 03:22:49.425922    8043 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:22:49.425958    8043 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:22:49.425963    8043 start.go:729] Will try again in 5 seconds ...
	I1209 03:22:54.428038    8043 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:22:54.428638    8043 start.go:364] duration metric: took 431.334µs to acquireMachinesLock for "functional-174000"
	I1209 03:22:54.428792    8043 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:22:54.428813    8043 fix.go:54] fixHost starting: 
	I1209 03:22:54.429650    8043 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
	W1209 03:22:54.429682    8043 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:22:54.438080    8043 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
	I1209 03:22:54.442152    8043 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:22:54.442417    8043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
	I1209 03:22:54.452840    8043 main.go:141] libmachine: STDOUT: 
	I1209 03:22:54.452893    8043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:22:54.452977    8043 fix.go:56] duration metric: took 24.166084ms for fixHost
	I1209 03:22:54.452994    8043 start.go:83] releasing machines lock for "functional-174000", held for 24.306375ms
	W1209 03:22:54.453165    8043 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:22:54.460109    8043 out.go:201] 
	W1209 03:22:54.464093    8043 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:22:54.464146    8043 out.go:270] * 
	* 
	W1209 03:22:54.466676    8043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:22:54.474078    8043 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-174000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.197543209s for "functional-174000" cluster.
I1209 03:22:54.488923    7820 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (74.278334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
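
SoftStart exercises the other start path: the functional-174000 profile already exists, so fixHost skips creation ("Skipping create...Using existing machine configuration") and tries to restart the Stopped machine, and the identical socket dial fails again. The profile therefore stays Stopped, and every remaining TestFunctional sub-test below fails fast against it, either with exit status 83 and "The control-plane node ... host is not running: state=Stopped" or one layer up in kubectl. The stuck state can be confirmed with the status command the post-mortems already run, plus `profile list` (an extra check, not run by the suite):

    out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000    # prints "Stopped", exit status 7
    out/minikube-darwin-arm64 profile list                                                           # lists functional-174000 without a running host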

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.33425ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-174000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (35.285542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
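
The aborted starts never got far enough to write a functional-174000 entry into the kubeconfig, so kubectl config current-context fails with "current-context is not set" rather than pointing at a stale context. The KubectlGetPods, MinikubeKubectlCmd, and MinikubeKubectlCmdDirectly failures below are the same gap viewed through --context. A direct check, assuming the KUBECONFIG path printed in the start output above:

    KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig kubectl config get-contexts
                                                   # no functional-174000 entry was ever written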

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-174000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-174000 get po -A: exit status 1 (26.755375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-174000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-174000\n"*: args "kubectl --context functional-174000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-174000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (34.673542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl images: exit status 83 (45.921458ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (51.860833ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-174000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.934416ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.951125ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-174000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 kubectl -- --context functional-174000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 kubectl -- --context functional-174000 get pods: exit status 1 (681.862666ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-174000
	* no server found for cluster "functional-174000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-174000 kubectl -- --context functional-174000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (36.263375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.72s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-174000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-174000 get pods: exit status 1 (1.16637875s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-174000
	* no server found for cluster "functional-174000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-174000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (33.3205ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.20s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.189427041s)

-- stdout --
	* [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-174000" primary control-plane node in "functional-174000" cluster
	* Restarting existing qemu2 VM for "functional-174000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-174000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-174000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.189875292s for "functional-174000" cluster.
I1209 03:23:05.199546    7820 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (75.178084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-174000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-174000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.264833ms)

** stderr ** 
	error: context "functional-174000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-174000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (35.026083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 logs: exit status 83 (80.855667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
	|         | -p download-only-118000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
	| delete  | -p download-only-118000                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
	| start   | -o=json --download-only                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
	|         | -p download-only-912000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| delete  | -p download-only-912000                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| delete  | -p download-only-118000                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| delete  | -p download-only-912000                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| start   | --download-only -p                                                       | binary-mirror-952000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | binary-mirror-952000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:60309                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-952000                                                  | binary-mirror-952000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| addons  | enable dashboard -p                                                      | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | addons-850000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | addons-850000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-850000 --wait=true                                             | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-850000                                                         | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| start   | -p nospam-251000 -n=1 --memory=2250 --wait=false                         | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-251000                                                         | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | minikube-local-cache-test:functional-174000                              |                      |         |         |                     |                     |
	| cache   | functional-174000 cache delete                                           | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | minikube-local-cache-test:functional-174000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| ssh     | functional-174000 ssh sudo                                               | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-174000                                                        | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-174000 ssh                                                    | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-174000 cache reload                                           | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	| ssh     | functional-174000 ssh                                                    | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-174000 kubectl --                                             | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
	|         | --context functional-174000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:23 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 03:23:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:23:00.040324    8124 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:23:00.040471    8124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:00.040473    8124 out.go:358] Setting ErrFile to fd 2...
	I1209 03:23:00.040475    8124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:00.040594    8124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:23:00.041772    8124 out.go:352] Setting JSON to false
	I1209 03:23:00.059173    8124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4951,"bootTime":1733738429,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:23:00.059275    8124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:23:00.064406    8124 out.go:177] * [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:23:00.073346    8124 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:23:00.073377    8124 notify.go:220] Checking for updates...
	I1209 03:23:00.081317    8124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:23:00.084373    8124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:23:00.087418    8124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:23:00.090344    8124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:23:00.093375    8124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:23:00.096596    8124 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:23:00.096650    8124 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:23:00.101296    8124 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:23:00.108237    8124 start.go:297] selected driver: qemu2
	I1209 03:23:00.108241    8124 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:23:00.108279    8124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:23:00.110818    8124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:23:00.110839    8124 cni.go:84] Creating CNI manager for ""
	I1209 03:23:00.110862    8124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:23:00.110908    8124 start.go:340] cluster config:
	{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:23:00.115614    8124 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:23:00.123203    8124 out.go:177] * Starting "functional-174000" primary control-plane node in "functional-174000" cluster
	I1209 03:23:00.127313    8124 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:23:00.127328    8124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:23:00.127339    8124 cache.go:56] Caching tarball of preloaded images
	I1209 03:23:00.127416    8124 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:23:00.127421    8124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:23:00.127475    8124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/functional-174000/config.json ...
	I1209 03:23:00.127960    8124 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:23:00.128010    8124 start.go:364] duration metric: took 45.334µs to acquireMachinesLock for "functional-174000"
	I1209 03:23:00.128018    8124 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:23:00.128021    8124 fix.go:54] fixHost starting: 
	I1209 03:23:00.128140    8124 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
	W1209 03:23:00.128147    8124 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:23:00.134324    8124 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
	I1209 03:23:00.138294    8124 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:23:00.138326    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
	I1209 03:23:00.140546    8124 main.go:141] libmachine: STDOUT: 
	I1209 03:23:00.140561    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:23:00.140593    8124 fix.go:56] duration metric: took 12.5715ms for fixHost
	I1209 03:23:00.140596    8124 start.go:83] releasing machines lock for "functional-174000", held for 12.582709ms
	W1209 03:23:00.140600    8124 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:23:00.140642    8124 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:23:00.140648    8124 start.go:729] Will try again in 5 seconds ...
	I1209 03:23:05.142842    8124 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:23:05.143380    8124 start.go:364] duration metric: took 445.75µs to acquireMachinesLock for "functional-174000"
	I1209 03:23:05.143587    8124 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:23:05.143599    8124 fix.go:54] fixHost starting: 
	I1209 03:23:05.144300    8124 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
	W1209 03:23:05.144318    8124 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:23:05.147857    8124 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
	I1209 03:23:05.154857    8124 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:23:05.155049    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
	I1209 03:23:05.165477    8124 main.go:141] libmachine: STDOUT: 
	I1209 03:23:05.165530    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:23:05.165609    8124 fix.go:56] duration metric: took 22.013042ms for fixHost
	I1209 03:23:05.165621    8124 start.go:83] releasing machines lock for "functional-174000", held for 22.2165ms
	W1209 03:23:05.165785    8124 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:23:05.172786    8124 out.go:201] 
	W1209 03:23:05.175798    8124 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:23:05.175816    8124 out.go:270] * 
	W1209 03:23:05.177926    8124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:23:05.186811    8124 out.go:201] 
	
	
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-174000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
|         | -p download-only-118000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
| delete  | -p download-only-118000                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
| start   | -o=json --download-only                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
|         | -p download-only-912000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| delete  | -p download-only-912000                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| delete  | -p download-only-118000                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| delete  | -p download-only-912000                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| start   | --download-only -p                                                       | binary-mirror-952000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | binary-mirror-952000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:60309                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-952000                                                  | binary-mirror-952000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| addons  | enable dashboard -p                                                      | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | addons-850000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | addons-850000                                                            |                      |         |         |                     |                     |
| start   | -p addons-850000 --wait=true                                             | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-850000                                                         | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| start   | -p nospam-251000 -n=1 --memory=2250 --wait=false                         | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-251000                                                         | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | minikube-local-cache-test:functional-174000                              |                      |         |         |                     |                     |
| cache   | functional-174000 cache delete                                           | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | minikube-local-cache-test:functional-174000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| ssh     | functional-174000 ssh sudo                                               | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-174000                                                        | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-174000 ssh                                                    | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-174000 cache reload                                           | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| ssh     | functional-174000 ssh                                                    | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-174000 kubectl --                                             | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --context functional-174000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:23 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/12/09 03:23:00
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
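(Decoded against this format, the first entry below, "I1209 03:23:00.040324    8124 out.go:345] ...", reads: severity I for info, date 12/09, time 03:23:00.040324, thread id 8124, emitted at out.go line 345.)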
I1209 03:23:00.040324    8124 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:00.040471    8124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:00.040473    8124 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:00.040475    8124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:00.040594    8124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:00.041772    8124 out.go:352] Setting JSON to false
I1209 03:23:00.059173    8124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4951,"bootTime":1733738429,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1209 03:23:00.059275    8124 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1209 03:23:00.064406    8124 out.go:177] * [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1209 03:23:00.073346    8124 out.go:177]   - MINIKUBE_LOCATION=20068
I1209 03:23:00.073377    8124 notify.go:220] Checking for updates...
I1209 03:23:00.081317    8124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
I1209 03:23:00.084373    8124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1209 03:23:00.087418    8124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 03:23:00.090344    8124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
I1209 03:23:00.093375    8124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1209 03:23:00.096596    8124 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:00.096650    8124 driver.go:394] Setting default libvirt URI to qemu:///system
I1209 03:23:00.101296    8124 out.go:177] * Using the qemu2 driver based on existing profile
I1209 03:23:00.108237    8124 start.go:297] selected driver: qemu2
I1209 03:23:00.108241    8124 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 03:23:00.108279    8124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 03:23:00.110818    8124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 03:23:00.110839    8124 cni.go:84] Creating CNI manager for ""
I1209 03:23:00.110862    8124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1209 03:23:00.110908    8124 start.go:340] cluster config:
{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 03:23:00.115614    8124 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 03:23:00.123203    8124 out.go:177] * Starting "functional-174000" primary control-plane node in "functional-174000" cluster
I1209 03:23:00.127313    8124 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1209 03:23:00.127328    8124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1209 03:23:00.127339    8124 cache.go:56] Caching tarball of preloaded images
I1209 03:23:00.127416    8124 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 03:23:00.127421    8124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1209 03:23:00.127475    8124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/functional-174000/config.json ...
I1209 03:23:00.127960    8124 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1209 03:23:00.128010    8124 start.go:364] duration metric: took 45.334µs to acquireMachinesLock for "functional-174000"
I1209 03:23:00.128018    8124 start.go:96] Skipping create...Using existing machine configuration
I1209 03:23:00.128021    8124 fix.go:54] fixHost starting: 
I1209 03:23:00.128140    8124 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
W1209 03:23:00.128147    8124 fix.go:138] unexpected machine state, will restart: <nil>
I1209 03:23:00.134324    8124 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
I1209 03:23:00.138294    8124 qemu.go:418] Using hvf for hardware acceleration
I1209 03:23:00.138326    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
I1209 03:23:00.140546    8124 main.go:141] libmachine: STDOUT: 
I1209 03:23:00.140561    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1209 03:23:00.140593    8124 fix.go:56] duration metric: took 12.5715ms for fixHost
I1209 03:23:00.140596    8124 start.go:83] releasing machines lock for "functional-174000", held for 12.582709ms
W1209 03:23:00.140600    8124 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1209 03:23:00.140642    8124 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1209 03:23:00.140648    8124 start.go:729] Will try again in 5 seconds ...
I1209 03:23:05.142842    8124 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1209 03:23:05.143380    8124 start.go:364] duration metric: took 445.75µs to acquireMachinesLock for "functional-174000"
I1209 03:23:05.143587    8124 start.go:96] Skipping create...Using existing machine configuration
I1209 03:23:05.143599    8124 fix.go:54] fixHost starting: 
I1209 03:23:05.144300    8124 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
W1209 03:23:05.144318    8124 fix.go:138] unexpected machine state, will restart: <nil>
I1209 03:23:05.147857    8124 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
I1209 03:23:05.154857    8124 qemu.go:418] Using hvf for hardware acceleration
I1209 03:23:05.155049    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
I1209 03:23:05.165477    8124 main.go:141] libmachine: STDOUT: 
I1209 03:23:05.165530    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1209 03:23:05.165609    8124 fix.go:56] duration metric: took 22.013042ms for fixHost
I1209 03:23:05.165621    8124 start.go:83] releasing machines lock for "functional-174000", held for 22.2165ms
W1209 03:23:05.165785    8124 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1209 03:23:05.172786    8124 out.go:201] 
W1209 03:23:05.175798    8124 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1209 03:23:05.175816    8124 out.go:270] * 
W1209 03:23:05.177926    8124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 03:23:05.186811    8124 out.go:201] 

* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
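
Every qemu2 start in this run fails the same way: the driver cannot reach the socket_vmnet socket (Failed to connect to "/var/run/socket_vmnet": Connection refused), so the VM launches without its network attachment and provisioning aborts. A minimal triage sketch for the build agent, assuming the Homebrew-managed socket_vmnet install whose paths appear in the log above (the service commands are an assumption about the agent's setup, not part of the captured output):

  ls -l /var/run/socket_vmnet                           # does the socket qemu attaches to exist?
  sudo lsof -U | grep socket_vmnet                      # is any process actually listening on it?
  sudo brew services restart socket_vmnet               # assumed: restart the Homebrew launchd service
  minikube start -p functional-174000 --driver=qemu2    # retry the failing start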

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1260358810/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
|         | -p download-only-118000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
| delete  | -p download-only-118000                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
| start   | -o=json --download-only                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
|         | -p download-only-912000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| delete  | -p download-only-912000                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| delete  | -p download-only-118000                                                  | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| delete  | -p download-only-912000                                                  | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| start   | --download-only -p                                                       | binary-mirror-952000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | binary-mirror-952000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:60309                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-952000                                                  | binary-mirror-952000 | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| addons  | enable dashboard -p                                                      | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | addons-850000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | addons-850000                                                            |                      |         |         |                     |                     |
| start   | -p addons-850000 --wait=true                                             | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-850000                                                         | addons-850000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| start   | -p nospam-251000 -n=1 --memory=2250 --wait=false                         | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-251000 --log_dir                                                  | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-251000                                                         | nospam-251000        | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-174000 cache add                                              | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | minikube-local-cache-test:functional-174000                              |                      |         |         |                     |                     |
| cache   | functional-174000 cache delete                                           | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | minikube-local-cache-test:functional-174000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| ssh     | functional-174000 ssh sudo                                               | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-174000                                                        | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-174000 ssh                                                    | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-174000 cache reload                                           | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
| ssh     | functional-174000 ssh                                                    | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:22 PST | 09 Dec 24 03:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-174000 kubectl --                                             | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:22 PST |                     |
|         | --context functional-174000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-174000                                                     | functional-174000    | jenkins | v1.34.0 | 09 Dec 24 03:23 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/12/09 03:23:00
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1209 03:23:00.040324    8124 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:00.040471    8124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:00.040473    8124 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:00.040475    8124 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:00.040594    8124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:00.041772    8124 out.go:352] Setting JSON to false
I1209 03:23:00.059173    8124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4951,"bootTime":1733738429,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1209 03:23:00.059275    8124 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1209 03:23:00.064406    8124 out.go:177] * [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1209 03:23:00.073346    8124 out.go:177]   - MINIKUBE_LOCATION=20068
I1209 03:23:00.073377    8124 notify.go:220] Checking for updates...
I1209 03:23:00.081317    8124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
I1209 03:23:00.084373    8124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1209 03:23:00.087418    8124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 03:23:00.090344    8124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
I1209 03:23:00.093375    8124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1209 03:23:00.096596    8124 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:00.096650    8124 driver.go:394] Setting default libvirt URI to qemu:///system
I1209 03:23:00.101296    8124 out.go:177] * Using the qemu2 driver based on existing profile
I1209 03:23:00.108237    8124 start.go:297] selected driver: qemu2
I1209 03:23:00.108241    8124 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 03:23:00.108279    8124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 03:23:00.110818    8124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 03:23:00.110839    8124 cni.go:84] Creating CNI manager for ""
I1209 03:23:00.110862    8124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1209 03:23:00.110908    8124 start.go:340] cluster config:
{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 03:23:00.115614    8124 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 03:23:00.123203    8124 out.go:177] * Starting "functional-174000" primary control-plane node in "functional-174000" cluster
I1209 03:23:00.127313    8124 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1209 03:23:00.127328    8124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1209 03:23:00.127339    8124 cache.go:56] Caching tarball of preloaded images
I1209 03:23:00.127416    8124 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 03:23:00.127421    8124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1209 03:23:00.127475    8124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/functional-174000/config.json ...
I1209 03:23:00.127960    8124 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1209 03:23:00.128010    8124 start.go:364] duration metric: took 45.334µs to acquireMachinesLock for "functional-174000"
I1209 03:23:00.128018    8124 start.go:96] Skipping create...Using existing machine configuration
I1209 03:23:00.128021    8124 fix.go:54] fixHost starting: 
I1209 03:23:00.128140    8124 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
W1209 03:23:00.128147    8124 fix.go:138] unexpected machine state, will restart: <nil>
I1209 03:23:00.134324    8124 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
I1209 03:23:00.138294    8124 qemu.go:418] Using hvf for hardware acceleration
I1209 03:23:00.138326    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
I1209 03:23:00.140546    8124 main.go:141] libmachine: STDOUT: 
I1209 03:23:00.140561    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1209 03:23:00.140593    8124 fix.go:56] duration metric: took 12.5715ms for fixHost
I1209 03:23:00.140596    8124 start.go:83] releasing machines lock for "functional-174000", held for 12.582709ms
W1209 03:23:00.140600    8124 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1209 03:23:00.140642    8124 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1209 03:23:00.140648    8124 start.go:729] Will try again in 5 seconds ...
I1209 03:23:05.142842    8124 start.go:360] acquireMachinesLock for functional-174000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1209 03:23:05.143380    8124 start.go:364] duration metric: took 445.75µs to acquireMachinesLock for "functional-174000"
I1209 03:23:05.143587    8124 start.go:96] Skipping create...Using existing machine configuration
I1209 03:23:05.143599    8124 fix.go:54] fixHost starting: 
I1209 03:23:05.144300    8124 fix.go:112] recreateIfNeeded on functional-174000: state=Stopped err=<nil>
W1209 03:23:05.144318    8124 fix.go:138] unexpected machine state, will restart: <nil>
I1209 03:23:05.147857    8124 out.go:177] * Restarting existing qemu2 VM for "functional-174000" ...
I1209 03:23:05.154857    8124 qemu.go:418] Using hvf for hardware acceleration
I1209 03:23:05.155049    8124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b5:19:69:a0:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/functional-174000/disk.qcow2
I1209 03:23:05.165477    8124 main.go:141] libmachine: STDOUT: 
I1209 03:23:05.165530    8124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1209 03:23:05.165609    8124 fix.go:56] duration metric: took 22.013042ms for fixHost
I1209 03:23:05.165621    8124 start.go:83] releasing machines lock for "functional-174000", held for 22.2165ms
W1209 03:23:05.165785    8124 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-174000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1209 03:23:05.172786    8124 out.go:201] 
W1209 03:23:05.175798    8124 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1209 03:23:05.175816    8124 out.go:270] * 
W1209 03:23:05.177926    8124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 03:23:05.186811    8124 out.go:201] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
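Every start failure in this log has the same root cause: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so the qemu2 driver can never boot the VM and the profile stays Stopped. A minimal triage sketch for the affected agent, assuming socket_vmnet was installed through Homebrew (the service name is an assumption; the socket and client paths are the ones logged above):

    # Confirm the socket exists and a daemon is holding it open
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet
    # Restart the daemon (assumes a Homebrew-managed service), then retry the profile
    sudo brew services restart socket_vmnet
    out/minikube-darwin-arm64 start -p functional-174000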

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-174000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-174000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.853291ms)

** stderr ** 
	error: context "functional-174000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-174000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
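The kubectl failures in this run are secondary: because start never succeeded, minikube never wrote a functional-174000 context into the kubeconfig, so every "kubectl --context functional-174000" invocation fails before reaching any cluster. A quick confirmation sketch using only standard kubectl subcommands and the KUBECONFIG path logged at the top of this run:

    KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig kubectl config get-contexts
    # a healthy run lists functional-174000 here; in this run the context is absent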

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-174000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-174000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-174000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-174000 --alsologtostderr -v=1] stderr:
I1209 03:23:45.123643    8429 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:45.124080    8429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.124084    8429 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:45.124086    8429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.124237    8429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:45.124462    8429 mustload.go:65] Loading cluster: functional-174000
I1209 03:23:45.124680    8429 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.129167    8429 out.go:177] * The control-plane node functional-174000 host is not running: state=Stopped
I1209 03:23:45.133130    8429 out.go:177]   To start a cluster, run: "minikube start -p functional-174000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (46.254625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 status: exit status 7 (34.9685ms)

-- stdout --
	functional-174000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-174000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.429375ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-174000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 status -o json: exit status 7 (34.487541ms)

-- stdout --
	{"Name":"functional-174000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-174000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (34.122ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
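Exit status 7 from "minikube status" is the documented encoding for a fully stopped profile: the command sets host, cluster, and Kubernetes health as bits from right to left, so 1 (host not OK) + 2 (cluster not OK) + 4 (kubernetes not OK) = 7. The "kublet" spelling above is only the label the test chose for its custom format string; the template fields themselves are .Host, .Kubelet, .APIServer, and .Kubeconfig. The same check by hand:

    out/minikube-darwin-arm64 -p functional-174000 status -f '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
    echo $?    # 7 = 1 + 2 + 4: host, cluster, and kubernetes all not OK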

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-174000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-174000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.338667ms)

** stderr ** 
	error: context "functional-174000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-174000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-174000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-174000 describe po hello-node-connect: exit status 1 (26.657333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

** /stderr **
functional_test.go:1604: "kubectl --context functional-174000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-174000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-174000 logs -l app=hello-node-connect: exit status 1 (26.538291ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

** /stderr **
functional_test.go:1610: "kubectl --context functional-174000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-174000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-174000 describe svc hello-node-connect: exit status 1 (26.601666ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

** /stderr **
functional_test.go:1616: "kubectl --context functional-174000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (34.636625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-174000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (35.060292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "echo hello": exit status 83 (45.497792ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n"*. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "cat /etc/hostname": exit status 83 (46.80175ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-174000"- but got *"* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n"*. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (34.720542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (56.730959ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
helpers_test.go:561: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.903833ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
helpers_test.go:539: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-174000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-174000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cp functional-174000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2489169134/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 cp functional-174000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2489169134/001/cp-test.txt: exit status 83 (46.529583ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
helpers_test.go:561: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-174000 cp functional-174000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2489169134/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.809042ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
helpers_test.go:539: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2489169134/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (53.699417ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
helpers_test.go:561: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (45.845042ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
helpers_test.go:539: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-174000 ssh -n functional-174000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-174000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-174000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
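With the host stopped, every cp invocation exits with status 83 and prints the start hint instead of transferring anything, which is why each go-cmp diff above shows the advice text in place of "Test file for checking file cp process". For reference, the three transfer directions the test exercises (syntax copied from the logged commands; a "<node>:" prefix selects a guest-side path, and ./out.txt is an illustrative host destination):

    out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-174000 cp functional-174000:/home/docker/cp-test.txt ./out.txt
    out/minikube-darwin-arm64 -p functional-174000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt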

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7820/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/test/nested/copy/7820/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/test/nested/copy/7820/hosts": exit status 83 (47.426166ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/test/nested/copy/7820/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-174000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-174000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (34.238583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
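FileSync exercises minikube's host-to-guest file sync: anything placed under $MINIKUBE_HOME/files/<path> on the host should appear at /<path> inside the VM after start (the 7820 path component is evidently the test process's pid, which keeps parallel runs isolated). With no running guest there is nothing to sync into. A sketch of the host-side fixture check, with the path assembled from the MINIKUBE_HOME logged above:

    ls /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/test/nested/copy/7820/hosts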

TestFunctional/parallel/CertSync (0.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7820.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/7820.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/7820.pem": exit status 83 (48.802208ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7820.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo cat /etc/ssl/certs/7820.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7820.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-174000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-174000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7820.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /usr/share/ca-certificates/7820.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /usr/share/ca-certificates/7820.pem": exit status 83 (49.666792ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7820.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo cat /usr/share/ca-certificates/7820.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7820.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-174000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-174000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (49.522333ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-174000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-174000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/78202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/78202.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/78202.pem": exit status 83 (44.722209ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/78202.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo cat /etc/ssl/certs/78202.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/78202.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-174000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-174000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/78202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /usr/share/ca-certificates/78202.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /usr/share/ca-certificates/78202.pem": exit status 83 (45.661958ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/78202.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo cat /usr/share/ca-certificates/78202.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/78202.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-174000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-174000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (55.137416ms)

-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-174000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-174000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (35.633167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.33s)
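CertSync checks both halves of certificate installation: the verbatim .pem copies and the OpenSSL subject-hash names (51391683.0, 3ec20f2e.0) that make the certs discoverable under /etc/ssl/certs. Those hashed names can be reproduced from the fixtures; a sketch assuming the two test certs from testdata are in the current directory:

    openssl x509 -noout -subject_hash -in minikube_test.pem     # expected: 51391683
    openssl x509 -noout -subject_hash -in minikube_test2.pem    # expected: 3ec20f2e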

TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-174000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-174000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.85475ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-174000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-174000 -n functional-174000: exit status 7 (36.180875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-174000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
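The template passed to kubectl above is an ordinary Go text/template. A self-contained sketch showing how it walks the first node's label map and prints each key followed by a space (the stand-in node data below is hypothetical):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical stand-in for the `kubectl get nodes` JSON structure.
	nodes := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]any{
						"minikube.k8s.io/name":    "functional-174000",
						"minikube.k8s.io/primary": "true",
					},
				},
			},
		},
	}
	t := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}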

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo systemctl is-active crio": exit status 83 (48.492583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
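For reference, this check is just `systemctl is-active` run over `minikube ssh`, expecting the literal string "inactive" for the runtime that is not selected. A minimal sketch under that assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-174000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	// With the docker runtime active, crio should report "inactive";
	// here the guest is stopped, so we get the advice text instead.
	if strings.TrimSpace(string(out)) != "inactive" {
		fmt.Printf("expected crio to be inactive, got %q\n", out)
	}
}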

                                                
                                    
TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 version -o=json --components: exit status 83 (44.920125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
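The assertion here is a plain substring check over the JSON output, one probe per expected component. A minimal sketch of the same check, assuming the command and profile from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-174000",
		"version", "-o=json", "--components").Output()
	if err != nil {
		fmt.Println(err) // here: exit status 83, so every probe below fails
	}
	for _, want := range []string{"buildctl", "commit", "containerd", "crictl",
		"crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("expected to see %q in the components output\n", want)
		}
	}
}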

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format short --alsologtostderr:
I1209 03:23:45.559518    8444 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:45.559726    8444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.559729    8444 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:45.559731    8444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.559868    8444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:45.560320    8444 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.560379    8444 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format table --alsologtostderr:
I1209 03:23:45.678273    8450 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:45.678450    8450 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.678453    8450 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:45.678456    8450 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.678589    8450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:45.678991    8450 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.679060    8450 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format json --alsologtostderr:
I1209 03:23:45.598861    8446 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:45.599322    8446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.599327    8446 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:45.599329    8446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.599518    8446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:45.600253    8446 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.600332    8446 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-174000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image ls --format yaml --alsologtostderr:
I1209 03:23:45.639836    8448 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:45.640006    8448 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.640009    8448 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:45.640011    8448 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.640140    8448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:45.640567    8448 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.640885    8448 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh pgrep buildkitd: exit status 83 (47.640125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image build -t localhost/my-image:functional-174000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-174000 image build -t localhost/my-image:functional-174000 testdata/build --alsologtostderr:
I1209 03:23:45.765797    8454 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:45.766260    8454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.766264    8454 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:45.766266    8454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:45.766389    8454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:45.766790    8454 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.767258    8454 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:45.767502    8454 build_images.go:133] succeeded building to: 
I1209 03:23:45.767506    8454 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
functional_test.go:446: expected "localhost/my-image:functional-174000" to be loaded into minikube but the image is not there
I1209 03:23:58.954159    7820 retry.go:31] will retry after 18.348634978s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-174000 docker-env) && out/minikube-darwin-arm64 status -p functional-174000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-174000 docker-env) && out/minikube-darwin-arm64 status -p functional-174000": exit status 1 (47.219416ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2: exit status 83 (46.679ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:23:45.418282    8438 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:23:45.418728    8438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:45.418732    8438 out.go:358] Setting ErrFile to fd 2...
	I1209 03:23:45.418735    8438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:45.418882    8438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:23:45.419129    8438 mustload.go:65] Loading cluster: functional-174000
	I1209 03:23:45.419348    8438 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:23:45.423389    8438 out.go:177] * The control-plane node functional-174000 host is not running: state=Stopped
	I1209 03:23:45.427380    8438 out.go:177]   To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2: exit status 83 (47.506084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:23:45.511692    8442 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:23:45.511862    8442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:45.511865    8442 out.go:358] Setting ErrFile to fd 2...
	I1209 03:23:45.511867    8442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:45.512019    8442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:23:45.512229    8442 mustload.go:65] Loading cluster: functional-174000
	I1209 03:23:45.512436    8442 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:23:45.517405    8442 out.go:177] * The control-plane node functional-174000 host is not running: state=Stopped
	I1209 03:23:45.521365    8442 out.go:177]   To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2: exit status 83 (45.711125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:23:45.464679    8440 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:23:45.464862    8440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:45.464866    8440 out.go:358] Setting ErrFile to fd 2...
	I1209 03:23:45.464869    8440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:45.464996    8440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:23:45.465237    8440 mustload.go:65] Loading cluster: functional-174000
	I1209 03:23:45.465433    8440 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:23:45.469469    8440 out.go:177] * The control-plane node functional-174000 host is not running: state=Stopped
	I1209 03:23:45.473354    8440 out.go:177]   To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-174000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-174000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-174000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.513ms)

                                                
                                                
** stderr ** 
	error: context "functional-174000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-174000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 service list: exit status 83 (53.496625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-174000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 service list -o json: exit status 83 (51.750667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-174000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 service --namespace=default --https --url hello-node: exit status 83 (45.8285ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-174000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 service hello-node --url --format={{.IP}}: exit status 83 (46.7025ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-174000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 service hello-node --url: exit status 83 (44.86375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-174000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test.go:1569: failed to parse "* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"": parse "* The control-plane node functional-174000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-174000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
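The final error is url.Parse rejecting the two-line advice text: the embedded newline is an ASCII control character, which net/url refuses. A tiny reproduction:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The minikube advice text spans two lines, so it contains "\n".
	s := "* The control-plane node functional-174000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-174000\""
	_, err := url.Parse(s)
	fmt.Println(err) // parse ...: net/url: invalid control character in URL
}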

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1209 03:23:07.132903    8242 out.go:345] Setting OutFile to fd 1 ...
I1209 03:23:07.133082    8242 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:07.133084    8242 out.go:358] Setting ErrFile to fd 2...
I1209 03:23:07.133087    8242 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:23:07.133229    8242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:23:07.133428    8242 mustload.go:65] Loading cluster: functional-174000
I1209 03:23:07.133655    8242 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:23:07.138793    8242 out.go:177] * The control-plane node functional-174000 host is not running: state=Stopped
I1209 03:23:07.149809    8242 out.go:177]   To start a cluster, run: "minikube start -p functional-174000"

                                                
                                                
stdout: * The control-plane node functional-174000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-174000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 8241: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-174000": client config: context "functional-174000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (70.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1209 03:23:07.219408    7820 retry.go:31] will retry after 2.85093881s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-174000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-174000 get svc nginx-svc: exit status 1 (69.634042ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-174000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-174000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (70.17s)
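The tunnel never published a service IP, so the retry loop polls a URL with an empty host, which net/http rejects before any connection is attempted. A one-liner that reproduces the loop's error:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	_, err := http.Get("http:") // scheme only, no host: the tunnel gave us nothing to append
	fmt.Println(err)            // Get "http:": http: no Host in request URL
}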

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-174000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-174000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-174000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-174000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image save kicbase/echo-server:functional-174000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-174000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1209 03:24:17.391061    7820 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036250291s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 12 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
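The dig invocation above sends the query straight to the cluster DNS service rather than the system resolvers. A Go equivalent, assuming the same nameserver and record name, is a custom net.Resolver pinned to 10.96.0.10:53:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Ignore the address the resolver would normally use and
		// dial the cluster DNS service directly.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err) // here: i/o timeout, no servers could be reached
}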

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1209 03:24:42.535931    7820 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:24:52.538004    7820 retry.go:31] will retry after 3.119935901s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1209 03:25:05.662079    7820 retry.go:31] will retry after 4.101986482s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:64511->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.06s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.1s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-488000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-488000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.017509209s)

                                                
                                                
-- stdout --
	* [ha-488000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:25:12.945201    8489 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:25:12.945353    8489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:25:12.945356    8489 out.go:358] Setting ErrFile to fd 2...
	I1209 03:25:12.945359    8489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:25:12.945483    8489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:25:12.946592    8489 out.go:352] Setting JSON to false
	I1209 03:25:12.964297    8489 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5083,"bootTime":1733738429,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:25:12.964381    8489 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:25:12.970523    8489 out.go:177] * [ha-488000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:25:12.978427    8489 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:25:12.978465    8489 notify.go:220] Checking for updates...
	I1209 03:25:12.985354    8489 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:25:12.988404    8489 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:25:12.991436    8489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:25:12.992665    8489 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:25:12.995439    8489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:25:12.998668    8489 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:25:13.002279    8489 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:25:13.009477    8489 start.go:297] selected driver: qemu2
	I1209 03:25:13.009485    8489 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:25:13.009493    8489 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:25:13.012084    8489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:25:13.016307    8489 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:25:13.019519    8489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:25:13.019537    8489 cni.go:84] Creating CNI manager for ""
	I1209 03:25:13.019556    8489 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 03:25:13.019561    8489 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 03:25:13.019608    8489 start.go:340] cluster config:
	{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:25:13.024474    8489 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:25:13.032361    8489 out.go:177] * Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	I1209 03:25:13.036474    8489 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:25:13.036493    8489 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:25:13.036504    8489 cache.go:56] Caching tarball of preloaded images
	I1209 03:25:13.036609    8489 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:25:13.036615    8489 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:25:13.036835    8489 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/ha-488000/config.json ...
	I1209 03:25:13.036848    8489 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/ha-488000/config.json: {Name:mkdf63fb841f73d422d864976b40364f2130f0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:25:13.037180    8489 start.go:360] acquireMachinesLock for ha-488000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:25:13.037231    8489 start.go:364] duration metric: took 45.041µs to acquireMachinesLock for "ha-488000"
	I1209 03:25:13.037243    8489 start.go:93] Provisioning new machine with config: &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:25:13.037276    8489 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:25:13.044431    8489 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:25:13.062268    8489 start.go:159] libmachine.API.Create for "ha-488000" (driver="qemu2")
	I1209 03:25:13.062303    8489 client.go:168] LocalClient.Create starting
	I1209 03:25:13.062376    8489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:25:13.062415    8489 main.go:141] libmachine: Decoding PEM data...
	I1209 03:25:13.062429    8489 main.go:141] libmachine: Parsing certificate...
	I1209 03:25:13.062470    8489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:25:13.062501    8489 main.go:141] libmachine: Decoding PEM data...
	I1209 03:25:13.062510    8489 main.go:141] libmachine: Parsing certificate...
	I1209 03:25:13.063111    8489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:25:13.225636    8489 main.go:141] libmachine: Creating SSH key...
	I1209 03:25:13.422449    8489 main.go:141] libmachine: Creating Disk image...
	I1209 03:25:13.422456    8489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:25:13.422718    8489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:25:13.433036    8489 main.go:141] libmachine: STDOUT: 
	I1209 03:25:13.433057    8489 main.go:141] libmachine: STDERR: 
	I1209 03:25:13.433122    8489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2 +20000M
	I1209 03:25:13.441753    8489 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:25:13.441775    8489 main.go:141] libmachine: STDERR: 
	I1209 03:25:13.441792    8489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:25:13.441797    8489 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:25:13.441808    8489 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:25:13.441850    8489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:2f:49:28:9b:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:25:13.443652    8489 main.go:141] libmachine: STDOUT: 
	I1209 03:25:13.443665    8489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:25:13.443687    8489 client.go:171] duration metric: took 381.388292ms to LocalClient.Create
	I1209 03:25:15.445863    8489 start.go:128] duration metric: took 2.408628375s to createHost
	I1209 03:25:15.445920    8489 start.go:83] releasing machines lock for "ha-488000", held for 2.408742875s
	W1209 03:25:15.445980    8489 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:25:15.463210    8489 out.go:177] * Deleting "ha-488000" in qemu2 ...
	W1209 03:25:15.499133    8489 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:25:15.499170    8489 start.go:729] Will try again in 5 seconds ...
	I1209 03:25:20.500798    8489 start.go:360] acquireMachinesLock for ha-488000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:25:20.501541    8489 start.go:364] duration metric: took 576µs to acquireMachinesLock for "ha-488000"
	I1209 03:25:20.501665    8489 start.go:93] Provisioning new machine with config: &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:25:20.501949    8489 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:25:20.520833    8489 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:25:20.572486    8489 start.go:159] libmachine.API.Create for "ha-488000" (driver="qemu2")
	I1209 03:25:20.572533    8489 client.go:168] LocalClient.Create starting
	I1209 03:25:20.572691    8489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:25:20.572786    8489 main.go:141] libmachine: Decoding PEM data...
	I1209 03:25:20.572808    8489 main.go:141] libmachine: Parsing certificate...
	I1209 03:25:20.572882    8489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:25:20.572949    8489 main.go:141] libmachine: Decoding PEM data...
	I1209 03:25:20.572962    8489 main.go:141] libmachine: Parsing certificate...
	I1209 03:25:20.573808    8489 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:25:20.749015    8489 main.go:141] libmachine: Creating SSH key...
	I1209 03:25:20.859609    8489 main.go:141] libmachine: Creating Disk image...
	I1209 03:25:20.859617    8489 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:25:20.859852    8489 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:25:20.869876    8489 main.go:141] libmachine: STDOUT: 
	I1209 03:25:20.869896    8489 main.go:141] libmachine: STDERR: 
	I1209 03:25:20.869948    8489 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2 +20000M
	I1209 03:25:20.878315    8489 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:25:20.878331    8489 main.go:141] libmachine: STDERR: 
	I1209 03:25:20.878342    8489 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:25:20.878347    8489 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:25:20.878355    8489 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:25:20.878393    8489 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4a:14:28:1d:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:25:20.880155    8489 main.go:141] libmachine: STDOUT: 
	I1209 03:25:20.880173    8489 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:25:20.880186    8489 client.go:171] duration metric: took 307.656792ms to LocalClient.Create
	I1209 03:25:22.882320    8489 start.go:128] duration metric: took 2.3804055s to createHost
	I1209 03:25:22.882376    8489 start.go:83] releasing machines lock for "ha-488000", held for 2.380867s
	W1209 03:25:22.882784    8489 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:25:22.898428    8489 out.go:201] 
	W1209 03:25:22.903600    8489 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:25:22.903643    8489 out.go:270] * 
	* 
	W1209 03:25:22.906131    8489 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:25:22.915402    8489 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-488000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (75.607709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.10s)
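
Note: both VM-creation attempts above die at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so QEMU is never launched and every later TestMultiControlPlane subtest inherits a stopped host. A minimal triage sketch, assuming socket_vmnet was installed through Homebrew as in the minikube qemu2 driver docs (paths and service management may differ on other setups):

	# check that the daemon's unix socket exists at the path minikube expects
	ls -l /var/run/socket_vmnet
	# restart the Homebrew-managed daemon (run under sudo; root owns the socket)
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet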

TestMultiControlPlane/serial/DeployApp (104.38s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (65.227542ms)

** stderr ** 
	error: cluster "ha-488000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- rollout status deployment/busybox: exit status 1 (62.1825ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.685ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:23.197475    7820 retry.go:31] will retry after 784.260443ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.682792ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:24.092781    7820 retry.go:31] will retry after 1.67711686s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.418583ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:25.881623    7820 retry.go:31] will retry after 3.291862s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.227208ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:29.284049    7820 retry.go:31] will retry after 3.805109385s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.825167ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:33.201211    7820 retry.go:31] will retry after 3.240353461s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.087584ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:36.551957    7820 retry.go:31] will retry after 6.511453489s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.356083ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:43.175924    7820 retry.go:31] will retry after 11.601516178s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.250417ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:25:54.887890    7820 retry.go:31] will retry after 10.876192308s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.741292ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:26:05.875188    7820 retry.go:31] will retry after 24.480831215s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.717042ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:26:30.466569    7820 retry.go:31] will retry after 36.52517152s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.147292ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.363084ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.250792ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.543125ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.853125ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.6065ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (104.38s)
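
Note: this is a downstream failure, not an independent one. StartCluster never provisioned the host, so no kubeconfig entry for "ha-488000" exists and every kubectl invocation fails immediately; the retry loop at ha_test.go:140 backs off with growing jittered delays (0.78s up to ~36.5s), which is what stretches this subtest to 104s. A quick way to confirm the missing cluster entry, assuming kubectl reads the same KUBECONFIG the test used:

	kubectl config get-contexts
	kubectl config view -o jsonpath='{.clusters[*].name}'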

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-488000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.382583ms)

** stderr ** 
	error: no server found for cluster "ha-488000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.846ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-488000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-488000 -v=7 --alsologtostderr: exit status 83 (48.106416ms)

-- stdout --
	* The control-plane node ha-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-488000"

-- /stdout --
** stderr ** 
	I1209 03:27:07.514345    8586 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:07.514745    8586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.514754    8586 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:07.514756    8586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.514932    8586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:07.515161    8586 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:07.515379    8586 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:07.520434    8586 out.go:177] * The control-plane node ha-488000 host is not running: state=Stopped
	I1209 03:27:07.525370    8586 out.go:177]   To start a cluster, run: "minikube start -p ha-488000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-488000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (35.090208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)
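
Note: exit status 83 is minikube signalling a wrong host state rather than a crashed command: "node add" refuses to run while the control-plane host is Stopped. A plausible manual recovery, assuming the socket_vmnet daemon has been repaired first (flags mirror the original test invocation):

	out/minikube-darwin-arm64 delete -p ha-488000
	out/minikube-darwin-arm64 start -p ha-488000 --memory=2200 --ha --driver=qemu2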

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-488000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-488000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.907792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-488000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-488000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-488000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.175292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-488000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-488000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.128208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
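
Note: the assertion expects a 4-node profile in "HAppy" state, but the stored config still holds the single control-plane node with Status "Starting". With jq available (an assumption, not part of the test harness), the relevant fields can be pulled from the same command the test runs:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'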

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status --output json -v=7 --alsologtostderr: exit status 7 (34.287083ms)

-- stdout --
	{"Name":"ha-488000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1209 03:27:07.747731    8598 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:07.747926    8598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.747929    8598 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:07.747932    8598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.748058    8598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:07.748170    8598 out.go:352] Setting JSON to true
	I1209 03:27:07.748180    8598 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:07.748235    8598 notify.go:220] Checking for updates...
	I1209 03:27:07.748380    8598 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:07.748387    8598 status.go:174] checking status of ha-488000 ...
	I1209 03:27:07.748628    8598 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:07.748631    8598 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:07.748634    8598 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-488000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.511166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
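
Note: the unmarshal failure is a shape mismatch, not corrupt output: with a single node, "status --output json" prints one JSON object, while the test decodes into []cluster.Status and so needs the array form a multi-node profile would emit. This can be checked directly (jq assumed):

	out/minikube-darwin-arm64 -p ha-488000 status --output json | jq type    # prints "object" here, not "array"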

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.801041ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 03:27:07.817790    8602 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:07.818219    8602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.818222    8602 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:07.818225    8602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.818397    8602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:07.818636    8602 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:07.818836    8602 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:07.822717    8602 out.go:201] 
	W1209 03:27:07.825708    8602 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1209 03:27:07.825712    8602 out.go:270] * 
	* 
	W1209 03:27:07.827575    8602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:27:07.830795    8602 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-488000 node stop m02 -v=7 --alsologtostderr": exit status 85
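
Note that the stop fails before any VM interaction: the saved profile for "ha-488000" records only the primary control-plane node, so looking up "m02" in the cluster config fails and minikube exits with its guest-layer error GUEST_NODE_RETRIEVE (exit status 85). A minimal Go sketch of that failure mode, using illustrative types rather than minikube's actual config structs:

    package sketch

    import "fmt"

    // Illustrative types only; minikube's real cluster config lives in
    // pkg/minikube/config and carries many more fields.
    type Node struct {
        Name         string
        ControlPlane bool
    }

    type ClusterConfig struct {
        Name  string
        Nodes []Node
    }

    // retrieveNode mirrors the failure behind GUEST_NODE_RETRIEVE: the
    // loaded profile has no node named "m02", so the lookup errors out.
    func retrieveNode(cc ClusterConfig, name string) (*Node, error) {
        for i := range cc.Nodes {
            if cc.Nodes[i].Name == name {
                return &cc.Nodes[i], nil
            }
        }
        return nil, fmt.Errorf("could not find node %s", name)
    }

The doubled output in the capture above is expected rather than a bug: each user-facing message appears once with a W1209 klog prefix and once bare, because minikube writes it to both the log stream and stderr.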
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (34.193375ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:07.867080    8604 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:07.867269    8604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.867272    8604 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:07.867274    8604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:07.867405    8604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:07.867531    8604 out.go:352] Setting JSON to false
	I1209 03:27:07.867542    8604 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:07.867611    8604 notify.go:220] Checking for updates...
	I1209 03:27:07.867760    8604 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:07.867768    8604 status.go:174] checking status of ha-488000 ...
	I1209 03:27:07.868040    8604 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:07.868044    8604 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:07.868046    8604 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
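
Exit status 7 from "minikube status" is informational rather than a crash: the command encodes component health in its exit code as bit flags (host, kubelet, apiserver, from least significant bit), so a fully stopped profile yields 1|2|4 = 7. A sketch of that encoding, with names assumed from minikube's documented status exit codes rather than copied from its source:

    package sketch

    // Assumed bit-flag layout for the status exit code; the constant
    // names here are illustrative.
    const (
        hostNotRunning    = 1 << 0 // VM/host stopped
        clusterNotRunning = 1 << 1 // kubelet stopped
        k8sNotRunning     = 1 << 2 // apiserver not answering
    )

    func statusExitCode(host, kubelet, apiserver string) int {
        code := 0
        if host != "Running" {
            code |= hostNotRunning
        }
        if kubelet != "Running" {
            code |= clusterNotRunning
        }
        if apiserver != "Running" {
            code |= k8sNotRunning
        }
        return code // all "Stopped" -> 1|2|4 = 7, as seen above
    }

The assertions that follow fail for the same underlying reason: only one node exists and every component on it is stopped.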
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.696834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-488000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
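
The assertion itself is a plain JSON comparison: the test shells out to "minikube profile list --output json", decodes the result, and checks the profile's Status field, which is still "Starting" because the VM never came up. A pared-down sketch of that kind of check (field names taken from the JSON above; the large Config payload is left undecoded):

    package sketch

    import (
        "encoding/json"
        "fmt"
    )

    // Only the fields the check needs; everything else in the payload
    // is ignored by the decoder.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func profileStatus(raw []byte, name string) (string, error) {
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            return "", err
        }
        for _, p := range pl.Valid {
            if p.Name == name {
                return p.Status, nil
            }
        }
        return "", fmt.Errorf("profile %q not found", name)
    }

Here the test expected "Degraded" (primary up, a secondary down) but got "Starting", since no node of the cluster ever reached Running.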
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (33.368334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (42.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 node start m02 -v=7 --alsologtostderr: exit status 85 (51.293667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:08.023932    8614 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:08.024408    8614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:08.024411    8614 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:08.024414    8614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:08.024572    8614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:08.024853    8614 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:08.025075    8614 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:08.029722    8614 out.go:201] 
	W1209 03:27:08.030986    8614 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1209 03:27:08.030991    8614 out.go:270] * 
	* 
	W1209 03:27:08.032763    8614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:27:08.036758    8614 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1209 03:27:08.023932    8614 out.go:345] Setting OutFile to fd 1 ...
I1209 03:27:08.024408    8614 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:27:08.024411    8614 out.go:358] Setting ErrFile to fd 2...
I1209 03:27:08.024414    8614 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:27:08.024572    8614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:27:08.024853    8614 mustload.go:65] Loading cluster: ha-488000
I1209 03:27:08.025075    8614 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:27:08.029722    8614 out.go:201] 
W1209 03:27:08.030986    8614 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1209 03:27:08.030991    8614 out.go:270] * 
* 
W1209 03:27:08.032763    8614 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 03:27:08.036758    8614 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-488000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (34.284083ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:08.074247    8617 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:08.074422    8617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:08.074425    8617 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:08.074428    8617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:08.074590    8617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:08.074707    8617 out.go:352] Setting JSON to false
	I1209 03:27:08.074717    8617 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:08.074777    8617 notify.go:220] Checking for updates...
	I1209 03:27:08.074947    8617 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:08.074955    8617 status.go:174] checking status of ha-488000 ...
	I1209 03:27:08.075189    8617 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:08.075193    8617 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:08.075195    8617 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:08.076119    7820 retry.go:31] will retry after 1.123287113s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (77.990917ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:09.277709    8619 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:09.277918    8619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:09.277922    8619 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:09.277925    8619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:09.278079    8619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:09.278232    8619 out.go:352] Setting JSON to false
	I1209 03:27:09.278244    8619 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:09.278291    8619 notify.go:220] Checking for updates...
	I1209 03:27:09.278494    8619 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:09.278503    8619 status.go:174] checking status of ha-488000 ...
	I1209 03:27:09.278796    8619 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:09.278801    8619 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:09.278803    8619 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:09.279835    7820 retry.go:31] will retry after 1.772414504s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (81.192667ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:11.133553    8621 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:11.133780    8621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:11.133784    8621 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:11.133788    8621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:11.133963    8621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:11.134125    8621 out.go:352] Setting JSON to false
	I1209 03:27:11.134137    8621 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:11.134181    8621 notify.go:220] Checking for updates...
	I1209 03:27:11.134382    8621 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:11.134391    8621 status.go:174] checking status of ha-488000 ...
	I1209 03:27:11.134705    8621 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:11.134710    8621 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:11.134712    8621 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:11.135783    7820 retry.go:31] will retry after 2.65378751s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (78.895417ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:13.868447    8623 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:13.868686    8623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:13.868691    8623 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:13.868694    8623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:13.868889    8623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:13.869077    8623 out.go:352] Setting JSON to false
	I1209 03:27:13.869091    8623 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:13.869131    8623 notify.go:220] Checking for updates...
	I1209 03:27:13.869363    8623 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:13.869372    8623 status.go:174] checking status of ha-488000 ...
	I1209 03:27:13.869684    8623 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:13.869689    8623 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:13.869691    8623 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:13.870766    7820 retry.go:31] will retry after 2.73206607s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (79.648625ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:16.681921    8625 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:16.682143    8625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:16.682147    8625 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:16.682150    8625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:16.682309    8625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:16.682450    8625 out.go:352] Setting JSON to false
	I1209 03:27:16.682462    8625 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:16.682498    8625 notify.go:220] Checking for updates...
	I1209 03:27:16.682722    8625 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:16.682732    8625 status.go:174] checking status of ha-488000 ...
	I1209 03:27:16.683044    8625 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:16.683048    8625 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:16.683051    8625 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:16.684075    7820 retry.go:31] will retry after 5.54830404s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (79.330666ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:22.311934    8630 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:22.312169    8630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:22.312173    8630 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:22.312176    8630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:22.312329    8630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:22.312475    8630 out.go:352] Setting JSON to false
	I1209 03:27:22.312486    8630 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:22.312521    8630 notify.go:220] Checking for updates...
	I1209 03:27:22.312732    8630 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:22.312741    8630 status.go:174] checking status of ha-488000 ...
	I1209 03:27:22.313047    8630 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:22.313052    8630 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:22.313054    8630 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:22.314079    7820 retry.go:31] will retry after 8.470931146s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (78.882375ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:30.863947    8634 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:30.864175    8634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:30.864179    8634 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:30.864182    8634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:30.864363    8634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:30.864528    8634 out.go:352] Setting JSON to false
	I1209 03:27:30.864541    8634 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:30.864589    8634 notify.go:220] Checking for updates...
	I1209 03:27:30.864808    8634 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:30.864817    8634 status.go:174] checking status of ha-488000 ...
	I1209 03:27:30.865086    8634 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:30.865090    8634 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:30.865092    8634 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:30.866127    7820 retry.go:31] will retry after 7.709228055s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (78.222334ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:38.653686    8639 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:38.653907    8639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:38.653911    8639 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:38.653914    8639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:38.654077    8639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:38.654227    8639 out.go:352] Setting JSON to false
	I1209 03:27:38.654243    8639 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:38.654282    8639 notify.go:220] Checking for updates...
	I1209 03:27:38.654479    8639 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:38.654487    8639 status.go:174] checking status of ha-488000 ...
	I1209 03:27:38.654780    8639 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:38.654784    8639 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:38.654786    8639 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:27:38.655827    7820 retry.go:31] will retry after 11.541148428s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (82.214791ms)

                                                
                                                
-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:27:50.278983    8641 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:50.279230    8641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:50.279234    8641 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:50.279237    8641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:50.279412    8641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:50.279576    8641 out.go:352] Setting JSON to false
	I1209 03:27:50.279587    8641 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:50.279626    8641 notify.go:220] Checking for updates...
	I1209 03:27:50.279844    8641 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:50.279853    8641 status.go:174] checking status of ha-488000 ...
	I1209 03:27:50.280165    8641 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:50.280169    8641 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:50.280172    8641 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr" : exit status 7
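
The "will retry after ..." lines above come from the harness's retry helper (retry.go:31): "minikube status" is re-run with growing, jittered delays (1.12s, 1.77s, 2.65s, 2.73s, 5.55s, 8.47s, 7.71s, 11.54s) until it exits 0 or the time budget is spent. A sketch of that pattern, with parameters that are illustrative rather than minikube's own:

    package sketch

    import (
        "log"
        "math/rand"
        "time"
    )

    // retryExpo re-runs f with roughly doubling, jittered delays until
    // it succeeds or the deadline passes; the jitter is why the waits
    // logged above do not double exactly.
    func retryExpo(f func() error, initial, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := initial
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            log.Printf("will retry after %v: %v", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
    }

In this run every attempt keeps returning exit status 7 because the host never leaves "Stopped", so the helper eventually gives up and the test fails.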
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (36.402042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (42.33s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-488000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-488000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.725834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-488000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-488000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-488000 -v=7 --alsologtostderr: (3.755909042s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229742583s)

                                                
                                                
-- stdout --
	* [ha-488000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
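
The restart never reaches the guest. With Network=socket_vmnet, the qemu2 driver launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet; "Connection refused" means nothing is listening on that socket, i.e. the daemon (started separately, with root privileges, outside minikube) is down on this agent. An illustrative preflight check for that condition, not part of minikube:

    package sketch

    import (
        "fmt"
        "net"
        "time"
    )

    // socketVMNetUp dials the daemon's unix socket; on this agent it
    // would fail with the same "connection refused" shown in the log.
    func socketVMNetUp(path string) error {
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
        }
        return conn.Close()
    }

The stderr trace below shows the driver hitting exactly this: two start attempts, each failing at socket_vmnet_client before qemu ever boots the VM, which is consistent with the many other qemu2 start failures in this report.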
** stderr ** 
	I1209 03:27:54.268662    8670 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:54.268865    8670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:54.268869    8670 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:54.268872    8670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:54.269055    8670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:54.270289    8670 out.go:352] Setting JSON to false
	I1209 03:27:54.290147    8670 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5245,"bootTime":1733738429,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:27:54.290243    8670 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:27:54.294323    8670 out.go:177] * [ha-488000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:27:54.300260    8670 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:27:54.300319    8670 notify.go:220] Checking for updates...
	I1209 03:27:54.307287    8670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:27:54.310217    8670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:27:54.314222    8670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:27:54.317266    8670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:27:54.320237    8670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:27:54.323546    8670 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:54.323600    8670 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:27:54.328253    8670 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:27:54.335238    8670 start.go:297] selected driver: qemu2
	I1209 03:27:54.335244    8670 start.go:901] validating driver "qemu2" against &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:27:54.335292    8670 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:27:54.337857    8670 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:27:54.337886    8670 cni.go:84] Creating CNI manager for ""
	I1209 03:27:54.337909    8670 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 03:27:54.337968    8670 start.go:340] cluster config:
	{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:27:54.342656    8670 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:27:54.350216    8670 out.go:177] * Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	I1209 03:27:54.354227    8670 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:27:54.354243    8670 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:27:54.354260    8670 cache.go:56] Caching tarball of preloaded images
	I1209 03:27:54.354340    8670 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:27:54.354346    8670 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:27:54.354400    8670 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/ha-488000/config.json ...
	I1209 03:27:54.354900    8670 start.go:360] acquireMachinesLock for ha-488000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:27:54.354950    8670 start.go:364] duration metric: took 44.291µs to acquireMachinesLock for "ha-488000"
	I1209 03:27:54.354959    8670 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:27:54.354966    8670 fix.go:54] fixHost starting: 
	I1209 03:27:54.355089    8670 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W1209 03:27:54.355097    8670 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:27:54.362260    8670 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I1209 03:27:54.365178    8670 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:27:54.365220    8670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4a:14:28:1d:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:27:54.367588    8670 main.go:141] libmachine: STDOUT: 
	I1209 03:27:54.367608    8670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:27:54.367640    8670 fix.go:56] duration metric: took 12.675959ms for fixHost
	I1209 03:27:54.367645    8670 start.go:83] releasing machines lock for "ha-488000", held for 12.690583ms
	W1209 03:27:54.367651    8670 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:27:54.367692    8670 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:27:54.367697    8670 start.go:729] Will try again in 5 seconds ...
	I1209 03:27:59.369837    8670 start.go:360] acquireMachinesLock for ha-488000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:27:59.370242    8670 start.go:364] duration metric: took 337.042µs to acquireMachinesLock for "ha-488000"
	I1209 03:27:59.370378    8670 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:27:59.370397    8670 fix.go:54] fixHost starting: 
	I1209 03:27:59.371046    8670 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W1209 03:27:59.371071    8670 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:27:59.379547    8670 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I1209 03:27:59.383379    8670 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:27:59.383594    8670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4a:14:28:1d:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:27:59.393241    8670 main.go:141] libmachine: STDOUT: 
	I1209 03:27:59.393302    8670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:27:59.393365    8670 fix.go:56] duration metric: took 22.971083ms for fixHost
	I1209 03:27:59.393378    8670 start.go:83] releasing machines lock for "ha-488000", held for 23.116125ms
	W1209 03:27:59.393602    8670 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:27:59.400551    8670 out.go:201] 
	W1209 03:27:59.404625    8670 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:27:59.404698    8670 out.go:270] * 
	* 
	W1209 03:27:59.407277    8670 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:27:59.414596    8670 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-488000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-488000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (35.465667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.13s)
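
Every start failure in this run shares one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so the qemu2 driver never gets a network. A minimal Go sketch of the failing step (illustrative only, not part of the test suite; the socket path is copied from the log above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path from the failing logs; socket_vmnet_client is handed this
	// socket before it ever execs qemu-system-aarch64.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If this probe also reports "connection refused" on the build host, restarting the socket_vmnet service (however it is supervised on that machine) is the likely fix before rerunning the suite.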

TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 node delete m03 -v=7 --alsologtostderr: exit status 83 (46.098333ms)

-- stdout --
	* The control-plane node ha-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-488000"

-- /stdout --
** stderr ** 
	I1209 03:27:59.571668    8682 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:59.572146    8682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:59.572150    8682 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:59.572152    8682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:59.572316    8682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:59.572539    8682 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:59.572750    8682 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:59.577284    8682 out.go:177] * The control-plane node ha-488000 host is not running: state=Stopped
	I1209 03:27:59.580156    8682 out.go:177]   To start a cluster, run: "minikube start -p ha-488000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-488000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (34.52825ms)

-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1209 03:27:59.617094    8684 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:27:59.617275    8684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:59.617278    8684 out.go:358] Setting ErrFile to fd 2...
	I1209 03:27:59.617281    8684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:27:59.617386    8684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:27:59.617513    8684 out.go:352] Setting JSON to false
	I1209 03:27:59.617523    8684 mustload.go:65] Loading cluster: ha-488000
	I1209 03:27:59.617581    8684 notify.go:220] Checking for updates...
	I1209 03:27:59.617699    8684 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:27:59.617707    8684 status.go:174] checking status of ha-488000 ...
	I1209 03:27:59.617951    8684 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:27:59.617955    8684 status.go:384] host is not running, skipping remaining checks
	I1209 03:27:59.617957    8684 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.965792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-488000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.350375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
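
The Degraded/HAppy assertions shell out to `profile list --output json` and read the Status and Nodes fields shown in the line above. A trimmed decoder sketch, with struct fields limited to what the captured payload actually contains (the types here are illustrative, not the suite's own):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the fields of the `profile list --output json`
// payload that the assertions read.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}

With the VM never started, Status remains "Starting" and Nodes holds a single entry, which is exactly the mismatch ha_test.go:415 reports.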

TestMultiControlPlane/serial/StopCluster (4.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-488000 stop -v=7 --alsologtostderr: (3.981024083s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr: exit status 7 (75.63ms)

-- stdout --
	ha-488000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1209 03:28:03.795613    8718 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:28:03.795832    8718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:03.795837    8718 out.go:358] Setting ErrFile to fd 2...
	I1209 03:28:03.795840    8718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:03.796012    8718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:28:03.796183    8718 out.go:352] Setting JSON to false
	I1209 03:28:03.796195    8718 mustload.go:65] Loading cluster: ha-488000
	I1209 03:28:03.796229    8718 notify.go:220] Checking for updates...
	I1209 03:28:03.796445    8718 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:28:03.796454    8718 status.go:174] checking status of ha-488000 ...
	I1209 03:28:03.796734    8718 status.go:371] ha-488000 host status = "Stopped" (err=<nil>)
	I1209 03:28:03.796738    8718 status.go:384] host is not running, skipping remaining checks
	I1209 03:28:03.796741    8718 status.go:176] ha-488000 status: &{Name:ha-488000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-488000 status -v=7 --alsologtostderr": ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (36.018417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (4.09s)
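
The three assertions above count per-node markers in the plain-text status output; a healthy three-node HA cluster prints one stanza per node. A sketch of that counting over the single stanza this run produced (assuming the test counts substrings; the program below is illustrative, not the suite's code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The single stanza this run produced; a healthy HA cluster would
	// print one such stanza per node.
	status := `ha-488000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))    // test wants two
	fmt.Println("kubelets stopped:", strings.Count(status, "kubelet: Stopped"))     // test wants three
	fmt.Println("apiservers stopped:", strings.Count(status, "apiserver: Stopped")) // test wants two
}

One stanza yields counts of 1/1/1, hence all three checks at ha_test.go:545, :551, and :554 fail.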

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.190667458s)

-- stdout --
	* [ha-488000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:28:03.866435    8722 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:28:03.866591    8722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:03.866595    8722 out.go:358] Setting ErrFile to fd 2...
	I1209 03:28:03.866597    8722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:03.866721    8722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:28:03.867811    8722 out.go:352] Setting JSON to false
	I1209 03:28:03.885753    8722 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5254,"bootTime":1733738429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:28:03.885831    8722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:28:03.890725    8722 out.go:177] * [ha-488000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:28:03.898679    8722 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:28:03.898710    8722 notify.go:220] Checking for updates...
	I1209 03:28:03.905585    8722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:28:03.908633    8722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:28:03.912609    8722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:28:03.915632    8722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:28:03.918638    8722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:28:03.921874    8722 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:28:03.922138    8722 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:28:03.925542    8722 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:28:03.931570    8722 start.go:297] selected driver: qemu2
	I1209 03:28:03.931576    8722 start.go:901] validating driver "qemu2" against &{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:28:03.931625    8722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:28:03.934127    8722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:28:03.934149    8722 cni.go:84] Creating CNI manager for ""
	I1209 03:28:03.934168    8722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 03:28:03.934217    8722 start.go:340] cluster config:
	{Name:ha-488000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:28:03.938750    8722 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:28:03.946446    8722 out.go:177] * Starting "ha-488000" primary control-plane node in "ha-488000" cluster
	I1209 03:28:03.950595    8722 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:28:03.950610    8722 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:28:03.950627    8722 cache.go:56] Caching tarball of preloaded images
	I1209 03:28:03.950683    8722 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:28:03.950691    8722 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:28:03.950753    8722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/ha-488000/config.json ...
	I1209 03:28:03.951267    8722 start.go:360] acquireMachinesLock for ha-488000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:28:03.951298    8722 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "ha-488000"
	I1209 03:28:03.951307    8722 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:28:03.951312    8722 fix.go:54] fixHost starting: 
	I1209 03:28:03.951449    8722 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W1209 03:28:03.951457    8722 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:28:03.958602    8722 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I1209 03:28:03.962636    8722 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:28:03.962676    8722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4a:14:28:1d:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:28:03.964908    8722 main.go:141] libmachine: STDOUT: 
	I1209 03:28:03.964928    8722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:28:03.964956    8722 fix.go:56] duration metric: took 13.642541ms for fixHost
	I1209 03:28:03.964960    8722 start.go:83] releasing machines lock for "ha-488000", held for 13.657458ms
	W1209 03:28:03.964965    8722 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:28:03.965006    8722 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:28:03.965011    8722 start.go:729] Will try again in 5 seconds ...
	I1209 03:28:08.967207    8722 start.go:360] acquireMachinesLock for ha-488000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:28:08.967700    8722 start.go:364] duration metric: took 360.708µs to acquireMachinesLock for "ha-488000"
	I1209 03:28:08.967858    8722 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:28:08.967879    8722 fix.go:54] fixHost starting: 
	I1209 03:28:08.968606    8722 fix.go:112] recreateIfNeeded on ha-488000: state=Stopped err=<nil>
	W1209 03:28:08.968631    8722 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:28:08.974361    8722 out.go:177] * Restarting existing qemu2 VM for "ha-488000" ...
	I1209 03:28:08.978265    8722 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:28:08.978548    8722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4a:14:28:1d:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/ha-488000/disk.qcow2
	I1209 03:28:08.989140    8722 main.go:141] libmachine: STDOUT: 
	I1209 03:28:08.989207    8722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:28:08.989318    8722 fix.go:56] duration metric: took 21.441208ms for fixHost
	I1209 03:28:08.989340    8722 start.go:83] releasing machines lock for "ha-488000", held for 21.614375ms
	W1209 03:28:08.989540    8722 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:28:08.996276    8722 out.go:201] 
	W1209 03:28:09.000203    8722 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:28:09.000227    8722 out.go:270] * 
	* 
	W1209 03:28:09.003008    8722 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:28:09.012218    8722 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-488000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (75.512625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
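
The timeline above (fixHost at 03:28:03, a second attempt at 03:28:08, then the GUEST_PROVISION exit) shows the retry-once behavior around start.go:714/729. A compressed, illustrative rendering of that control flow (not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// fixHost stands in for the qemu2 driver start that keeps failing while
// the socket_vmnet daemon is down.
func fixHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	if err := fixHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
		if err = fixHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status 80 the harness records
		}
	}
}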

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-488000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.633708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-488000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-488000 --control-plane -v=7 --alsologtostderr: exit status 83 (45.149875ms)

-- stdout --
	* The control-plane node ha-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-488000"

-- /stdout --
** stderr ** 
	I1209 03:28:09.223078    8739 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:28:09.223284    8739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:09.223287    8739 out.go:358] Setting ErrFile to fd 2...
	I1209 03:28:09.223289    8739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:09.223433    8739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:28:09.223662    8739 mustload.go:65] Loading cluster: ha-488000
	I1209 03:28:09.223879    8739 config.go:182] Loaded profile config "ha-488000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:28:09.227469    8739 out.go:177] * The control-plane node ha-488000 host is not running: state=Stopped
	I1209 03:28:09.231406    8739 out.go:177]   To start a cluster, run: "minikube start -p ha-488000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-488000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.981334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-488000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-488000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-488000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-488000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-488000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-488000 -n ha-488000: exit status 7 (34.263208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

TestImageBuild/serial/Setup (9.88s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-390000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-390000 --driver=qemu2 : exit status 80 (9.805629958s)

-- stdout --
	* [image-390000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-390000" primary control-plane node in "image-390000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-390000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-390000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-390000 -n image-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-390000 -n image-390000: exit status 7 (73.817667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-390000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.88s)
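
Every start failure in this run reduces to the same symptom: "Connection refused" when dialing the unix socket /var/run/socket_vmnet, which means nothing was listening on that socket on the CI host (the socket_vmnet daemon was down). A minimal standalone Go probe reproduces the check; this is a diagnostic sketch, not part of the test suite, with the socket path taken from the logs above:

// probe_socket_vmnet.go — checks whether anything is listening on the
// socket_vmnet unix socket. "Connection refused" reproduces the failure
// seen throughout this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the logs above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this host we would see "connect: connection refused":
		// the socket file may exist, but no daemon has it open.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}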

                                                
                                    
TestJSONOutput/start/Command (9.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-275000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-275000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.90641975s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c3e2df30-184a-4f63-87fb-051647817ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-275000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"98bcaaa5-a90d-4b95-8b86-bccdbfbe88cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20068"}}
	{"specversion":"1.0","id":"3617535b-5b47-4aea-91ed-8150d64400ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig"}}
	{"specversion":"1.0","id":"06860846-8904-4a47-98bd-92e125f7a04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"80089f08-8264-4e24-9421-d676e6c04119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d0259983-e2fa-4456-ab47-130491fcd989","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube"}}
	{"specversion":"1.0","id":"2ee58170-185d-4b6f-a9c6-5fbec6e0ec29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c4837444-fdd0-4f15-830c-1a7fa79fea85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"41ee80b8-76fa-467d-a254-63895816ce8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f1b9db54-c78a-42b5-b527-968679896cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-275000\" primary control-plane node in \"json-output-275000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88b53abc-0649-47e3-afa5-8c30490740d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f7e8730a-d0df-4226-bc46-44c45a44e93e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-275000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf9791a8-0564-4414-8dc7-cb0c69131521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ae867fd8-4648-47bd-824a-6b588ffbd6a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"02b99e31-eaf7-4188-ad7b-cd02dd934fc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-275000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"82239609-f799-4d16-8b6c-caa5b790f28d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"7c56dbfd-652d-4c65-99e8-8dc40b20d164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-275000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.91s)
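
Beyond the provisioning failure itself, this test reports a secondary error: "converting to cloud events: invalid character 'O' looking for beginning of value". The test decodes stdout line by line as JSON CloudEvents, and the bare OUTPUT:/ERROR: lines injected by the qemu launcher are not JSON, so decoding aborts on the first such line; the unpause failure below hits the same mechanism with the human-readable "*" prefix. A simplified sketch of that decoding loop (illustrative names, not the test's actual helpers):

// Decodes each stdout line as a JSON event the way json_output_test.go
// does. The bare "OUTPUT: " line is not JSON, so json.Unmarshal fails on
// its first byte 'O' — exactly the error reported above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Printf("converting to cloud events: %v\n", err)
			return
		}
		fmt.Printf("event type: %v\n", ev["type"])
	}
}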

                                                
                                    
TestJSONOutput/pause/Command (0.09s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-275000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-275000 --output=json --user=testUser: exit status 83 (86.950792ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a19f3976-fd06-49ca-8299-c007a6ed9e62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-275000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"ea977862-a348-4416-bd51-4cea07827eb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-275000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-275000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-275000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-275000 --output=json --user=testUser: exit status 83 (50.618875ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-275000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-275000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-275000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-275000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-139000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-139000 --driver=qemu2 : exit status 80 (10.075860625s)

                                                
                                                
-- stdout --
	* [first-139000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-139000" primary control-plane node in "first-139000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-139000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-09 03:28:42.014413 -0800 PST m=+427.166457918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-140000 -n second-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-140000 -n second-140000: exit status 85 (86.99575ms)

                                                
                                                
-- stdout --
	* Profile "second-140000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-140000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-140000" host is not running, skipping log retrieval (state="* Profile \"second-140000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-140000\"")
helpers_test.go:175: Cleaning up "second-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-140000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-09 03:28:42.220132 -0800 PST m=+427.372182626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-139000 -n first-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-139000 -n first-139000: exit status 7 (34.706958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-139000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-139000
--- FAIL: TestMinikubeProfile (10.40s)
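
Note that the post-mortem helpers deliberately tolerate some non-zero status exits: in this run exit 7 corresponds to a "Stopped" host and exit 85 to a profile that does not exist, both logged as "(may be ok)" rather than hard failures. A hypothetical standalone version of that check, using the standard exec.ExitError unwrapping (helpers_test.go's real logic differs in detail):

// Runs `minikube status` and treats a non-zero exit as an informational
// state code instead of a fatal error — a sketch of the "(may be ok)"
// handling visible above.
package main

import (
	"fmt"
	"os/exec"
)

func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode() // non-zero exits carry host-state information
		err = nil            // soft result: report it, don't fail
	}
	return string(out), code, err
}

func main() {
	state, code, err := hostStatus("first-139000")
	if err != nil {
		fmt.Println("status could not run:", err)
		return
	}
	fmt.Printf("state=%q exit=%d (non-zero may be ok)\n", state, code)
}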

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-101000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-101000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.020977708s)

                                                
                                                
-- stdout --
	* [mount-start-1-101000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-101000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-101000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-101000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-101000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-101000 -n mount-start-1-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-101000 -n mount-start-1-101000: exit status 7 (76.738833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-101000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.10s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.912423458s)

                                                
                                                
-- stdout --
	* [multinode-263000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-263000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:28:52.665786    8874 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:28:52.665944    8874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:52.665947    8874 out.go:358] Setting ErrFile to fd 2...
	I1209 03:28:52.665950    8874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:28:52.666078    8874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:28:52.667177    8874 out.go:352] Setting JSON to false
	I1209 03:28:52.684961    8874 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5303,"bootTime":1733738429,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:28:52.685032    8874 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:28:52.691429    8874 out.go:177] * [multinode-263000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:28:52.700334    8874 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:28:52.700390    8874 notify.go:220] Checking for updates...
	I1209 03:28:52.707387    8874 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:28:52.711302    8874 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:28:52.715349    8874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:28:52.718444    8874 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:28:52.721379    8874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:28:52.724473    8874 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:28:52.727368    8874 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:28:52.734360    8874 start.go:297] selected driver: qemu2
	I1209 03:28:52.734365    8874 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:28:52.734371    8874 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:28:52.736886    8874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:28:52.741386    8874 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:28:52.744397    8874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:28:52.744423    8874 cni.go:84] Creating CNI manager for ""
	I1209 03:28:52.744443    8874 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 03:28:52.744448    8874 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 03:28:52.744495    8874 start.go:340] cluster config:
	{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:28:52.749199    8874 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:28:52.757348    8874 out.go:177] * Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	I1209 03:28:52.761408    8874 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:28:52.761424    8874 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:28:52.761436    8874 cache.go:56] Caching tarball of preloaded images
	I1209 03:28:52.761526    8874 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:28:52.761536    8874 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:28:52.761732    8874 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/multinode-263000/config.json ...
	I1209 03:28:52.761744    8874 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/multinode-263000/config.json: {Name:mk74e3330b10b86438f0c54244f04a1b81da918b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:28:52.762199    8874 start.go:360] acquireMachinesLock for multinode-263000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:28:52.762247    8874 start.go:364] duration metric: took 41.792µs to acquireMachinesLock for "multinode-263000"
	I1209 03:28:52.762260    8874 start.go:93] Provisioning new machine with config: &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:28:52.762294    8874 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:28:52.769332    8874 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:28:52.786490    8874 start.go:159] libmachine.API.Create for "multinode-263000" (driver="qemu2")
	I1209 03:28:52.786521    8874 client.go:168] LocalClient.Create starting
	I1209 03:28:52.786616    8874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:28:52.786657    8874 main.go:141] libmachine: Decoding PEM data...
	I1209 03:28:52.786672    8874 main.go:141] libmachine: Parsing certificate...
	I1209 03:28:52.786711    8874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:28:52.786746    8874 main.go:141] libmachine: Decoding PEM data...
	I1209 03:28:52.786755    8874 main.go:141] libmachine: Parsing certificate...
	I1209 03:28:52.787229    8874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:28:52.949261    8874 main.go:141] libmachine: Creating SSH key...
	I1209 03:28:53.068006    8874 main.go:141] libmachine: Creating Disk image...
	I1209 03:28:53.068012    8874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:28:53.068250    8874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:28:53.078647    8874 main.go:141] libmachine: STDOUT: 
	I1209 03:28:53.078672    8874 main.go:141] libmachine: STDERR: 
	I1209 03:28:53.078730    8874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2 +20000M
	I1209 03:28:53.087245    8874 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:28:53.087261    8874 main.go:141] libmachine: STDERR: 
	I1209 03:28:53.087274    8874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:28:53.087278    8874 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:28:53.087291    8874 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:28:53.087324    8874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:a1:e8:4b:dd:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:28:53.089238    8874 main.go:141] libmachine: STDOUT: 
	I1209 03:28:53.089255    8874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:28:53.089271    8874 client.go:171] duration metric: took 302.752042ms to LocalClient.Create
	I1209 03:28:55.091409    8874 start.go:128] duration metric: took 2.329150583s to createHost
	I1209 03:28:55.091460    8874 start.go:83] releasing machines lock for "multinode-263000", held for 2.329265042s
	W1209 03:28:55.091523    8874 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:28:55.108807    8874 out.go:177] * Deleting "multinode-263000" in qemu2 ...
	W1209 03:28:55.137470    8874 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:28:55.137491    8874 start.go:729] Will try again in 5 seconds ...
	I1209 03:29:00.139681    8874 start.go:360] acquireMachinesLock for multinode-263000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:29:00.140180    8874 start.go:364] duration metric: took 417.584µs to acquireMachinesLock for "multinode-263000"
	I1209 03:29:00.140319    8874 start.go:93] Provisioning new machine with config: &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:29:00.140532    8874 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:29:00.160163    8874 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:29:00.209140    8874 start.go:159] libmachine.API.Create for "multinode-263000" (driver="qemu2")
	I1209 03:29:00.209196    8874 client.go:168] LocalClient.Create starting
	I1209 03:29:00.209321    8874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:29:00.209431    8874 main.go:141] libmachine: Decoding PEM data...
	I1209 03:29:00.209446    8874 main.go:141] libmachine: Parsing certificate...
	I1209 03:29:00.209505    8874 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:29:00.209562    8874 main.go:141] libmachine: Decoding PEM data...
	I1209 03:29:00.209577    8874 main.go:141] libmachine: Parsing certificate...
	I1209 03:29:00.210270    8874 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:29:00.385962    8874 main.go:141] libmachine: Creating SSH key...
	I1209 03:29:00.473989    8874 main.go:141] libmachine: Creating Disk image...
	I1209 03:29:00.474000    8874 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:29:00.474243    8874 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:29:00.484286    8874 main.go:141] libmachine: STDOUT: 
	I1209 03:29:00.484317    8874 main.go:141] libmachine: STDERR: 
	I1209 03:29:00.484375    8874 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2 +20000M
	I1209 03:29:00.492935    8874 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:29:00.492956    8874 main.go:141] libmachine: STDERR: 
	I1209 03:29:00.492965    8874 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:29:00.492971    8874 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:29:00.492981    8874 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:29:00.493016    8874 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:12:28:e0:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:29:00.494862    8874 main.go:141] libmachine: STDOUT: 
	I1209 03:29:00.494883    8874 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:29:00.494894    8874 client.go:171] duration metric: took 285.7ms to LocalClient.Create
	I1209 03:29:02.497048    8874 start.go:128] duration metric: took 2.356500208s to createHost
	I1209 03:29:02.497106    8874 start.go:83] releasing machines lock for "multinode-263000", held for 2.356964334s
	W1209 03:29:02.497509    8874 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:29:02.512265    8874 out.go:201] 
	W1209 03:29:02.516295    8874 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:29:02.516322    8874 out.go:270] * 
	* 
	W1209 03:29:02.518796    8874 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:29:02.530121    8874 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-263000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (73.758666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.99s)
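
The -alsologtostderr trace above makes the failure path concrete: the qemu-img convert/resize steps for the disk image succeed, and the error only surfaces when libmachine launches qemu-system-aarch64 through socket_vmnet_client, which must connect to /var/run/socket_vmnet before qemu ever starts. A condensed sketch of that launch pattern, with paths and flags trimmed from the logged command line (an illustration, not minikube's actual driver code):

// Launches qemu through socket_vmnet_client, mirroring the executing:
// line in the trace above. With no daemon on the socket, the client
// fails before qemu runs.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"/var/run/socket_vmnet", // socket the client connects to first
		"qemu-system-aarch64",   // the real VM command comes after it
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "2200", "-smp", "2",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is handed over by the client
	}
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no daemon listening, this reproduces the
		// `Failed to connect to "/var/run/socket_vmnet"` error above.
		fmt.Printf("launch failed: %v\n%s", err, out)
	}
}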

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (88.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (65.748708ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-263000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- rollout status deployment/busybox: exit status 1 (62.497208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.639125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:02.810724    7820 retry.go:31] will retry after 1.380899243s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.086917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:04.302078    7820 retry.go:31] will retry after 1.394367826s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.681042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:05.807432    7820 retry.go:31] will retry after 2.224671975s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.0555ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:08.142525    7820 retry.go:31] will retry after 4.330876384s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.414333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:12.586156    7820 retry.go:31] will retry after 7.518683032s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.062166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:20.214400    7820 retry.go:31] will retry after 11.277626591s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.412667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:31.605540    7820 retry.go:31] will retry after 13.41236848s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.88375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:45.136962    7820 retry.go:31] will retry after 13.243970066s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.887542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 03:29:58.501359    7820 retry.go:31] will retry after 32.722338023s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.361334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.912583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.118875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.115875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.124875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.522417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (88.98s)
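
Every kubectl call in this test fails with the same stderr, error: no server found for cluster "multinode-263000". kubectl emits that when the kubeconfig still names the context but no cluster entry carries a server URL, which is consistent with the VM never having come up. A minimal client-go sketch that reproduces the lookup kubectl performs; this is a hypothetical diagnostic, not part of the test suite, and it assumes only the KUBECONFIG path this run exports (/Users/jenkins/minikube-integration/20068-6536/kubeconfig).

// check_kubeconfig.go - hypothetical diagnostic, not part of minikube's tests.
// Verifies that the kubeconfig has a usable server entry for the cluster,
// which is what kubectl's "no server found for cluster" error is about.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	cluster, ok := cfg.Clusters["multinode-263000"]
	if !ok || cluster.Server == "" {
		fmt.Println(`no server found for cluster "multinode-263000"`)
		return
	}
	fmt.Println("server:", cluster.Server)
}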

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-263000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.425167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.651542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-263000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-263000 -v 3 --alsologtostderr: exit status 83 (49.514459ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-263000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:31.756754    9258 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:31.756942    9258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:31.756946    9258 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:31.756948    9258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:31.757104    9258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:31.757359    9258 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:31.757569    9258 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:31.763282    9258 out.go:177] * The control-plane node multinode-263000 host is not running: state=Stopped
	I1209 03:30:31.768135    9258 out.go:177]   To start a cluster, run: "minikube start -p multinode-263000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-263000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.468833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
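
The post-mortem helper above runs the status command and then notes "status error: exit status 7 (may be ok)": a stopped host is a state the status command reports via its exit code, not a crash. A sketch of how a caller can make that distinction, assuming nothing beyond the binary path and flags shown in the log:

// status_exit.go - illustrative only; mirrors the post-mortem command above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "multinode-263000")
	out, err := cmd.Output() // stdout carries the host state, e.g. "Stopped"
	fmt.Printf("host state: %s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// This report logs exit status 7 for a stopped host, which the
		// helper explicitly treats as "may be ok" rather than a failure.
		fmt.Println("exit code:", ee.ExitCode())
	}
}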

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-263000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-263000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.041042ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-263000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-263000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-263000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.967625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
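
The "unexpected end of JSON input" at multinode_test.go:230 is a knock-on effect: kubectl printed nothing to stdout because the context lookup failed first, and decoding an empty byte slice always yields that error. A two-line illustration, assuming nothing beyond the standard library:

// empty_json.go - shows why decoding kubectl's empty output fails.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The test decodes whatever kubectl wrote to stdout, which was nothing.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}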

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-263000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-263000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-263000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-263000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.767208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
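
The assertion at multinode_test.go:166 counts the Nodes array inside the profile JSON quoted above; because the earlier AddNode step failed, only the original control-plane node ever registered. A trimmed sketch of the decode-and-count step; the struct mirrors only the fields the assertion reads, not minikube's full config type, and the payload below is an abbreviated form of the one in the log:

// profile_nodes.go - illustrative decode of `profile list --output json`.
package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the fields needed here; the real config
// type is far larger (see the JSON quoted above).
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name string
			}
		}
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-263000","Config":{"Nodes":[{"Name":""}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	// The test expects 3 nodes after `node add`, but only the original
	// control-plane node is present, so the count is 1.
	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes))
}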

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --output json --alsologtostderr: exit status 7 (34.834583ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-263000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:31.989901    9270 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:31.990064    9270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:31.990067    9270 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:31.990070    9270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:31.990195    9270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:31.990314    9270 out.go:352] Setting JSON to true
	I1209 03:30:31.990324    9270 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:31.990368    9270 notify.go:220] Checking for updates...
	I1209 03:30:31.990531    9270 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:31.990539    9270 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:31.990802    9270 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:31.990806    9270 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:31.990808    9270 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-263000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.609959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
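
The unmarshal error at multinode_test.go:191 is a shape mismatch: with a single node, `minikube status --output json` emits one object (as in the stdout block above), while the test decodes into a slice of cluster.Status. A minimal reproduction; Status here is a stand-in for minikube's cluster.Status with only the fields needed for the illustration:

// status_shape.go - reproduces the []Status unmarshal failure.
package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in for minikube's cluster.Status.
type Status struct {
	Name string
	Host string
}

func main() {
	// One node means one JSON object, not an array of them.
	out := []byte(`{"Name":"multinode-263000","Host":"Stopped"}`)
	var statuses []Status
	if err := json.Unmarshal(out, &statuses); err != nil {
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}
}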

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 node stop m03: exit status 85 (51.647917ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-263000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status: exit status 7 (34.91525ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr: exit status 7 (34.74225ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:32.146703    9278 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:32.146873    9278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.146877    9278 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:32.146879    9278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.147021    9278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:32.147143    9278 out.go:352] Setting JSON to false
	I1209 03:30:32.147152    9278 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:32.147209    9278 notify.go:220] Checking for updates...
	I1209 03:30:32.147370    9278 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:32.147378    9278 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:32.147606    9278 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:32.147610    9278 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:32.147612    9278 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr": multinode-263000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (35.308834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.576959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:32.216483    9282 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:32.216895    9282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.216899    9282 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:32.216901    9282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.217044    9282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:32.217281    9282 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:32.217500    9282 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:32.219108    9282 out.go:201] 
	W1209 03:30:32.222790    9282 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1209 03:30:32.222796    9282 out.go:270] * 
	* 
	W1209 03:30:32.224529    9282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:30:32.228783    9282 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1209 03:30:32.216483    9282 out.go:345] Setting OutFile to fd 1 ...
I1209 03:30:32.216895    9282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:30:32.216899    9282 out.go:358] Setting ErrFile to fd 2...
I1209 03:30:32.216901    9282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 03:30:32.217044    9282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
I1209 03:30:32.217281    9282 mustload.go:65] Loading cluster: multinode-263000
I1209 03:30:32.217500    9282 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 03:30:32.219108    9282 out.go:201] 
W1209 03:30:32.222790    9282 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1209 03:30:32.222796    9282 out.go:270] * 
* 
W1209 03:30:32.224529    9282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 03:30:32.228783    9282 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-263000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (35.076875ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:32.267085    9284 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:32.267284    9284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.267287    9284 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:32.267289    9284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.267436    9284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:32.267552    9284 out.go:352] Setting JSON to false
	I1209 03:30:32.267562    9284 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:32.267632    9284 notify.go:220] Checking for updates...
	I1209 03:30:32.267771    9284 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:32.267778    9284 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:32.268033    9284 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:32.268037    9284 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:32.268039    9284 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:32.268966    7820 retry.go:31] will retry after 629.748724ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (76.233583ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:32.975169    9286 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:32.975386    9286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.975391    9286 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:32.975394    9286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:32.975555    9286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:32.975715    9286 out.go:352] Setting JSON to false
	I1209 03:30:32.975727    9286 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:32.975769    9286 notify.go:220] Checking for updates...
	I1209 03:30:32.975997    9286 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:32.976005    9286 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:32.976331    9286 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:32.976336    9286 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:32.976338    9286 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:32.977376    7820 retry.go:31] will retry after 1.765782371s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (76.838291ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:34.820331    9288 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:34.820559    9288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:34.820563    9288 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:34.820566    9288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:34.820734    9288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:34.820895    9288 out.go:352] Setting JSON to false
	I1209 03:30:34.820907    9288 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:34.820946    9288 notify.go:220] Checking for updates...
	I1209 03:30:34.821158    9288 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:34.821167    9288 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:34.821440    9288 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:34.821445    9288 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:34.821447    9288 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:34.822466    7820 retry.go:31] will retry after 2.661035209s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (79.867541ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:37.563582    9290 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:37.563837    9290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:37.563842    9290 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:37.563845    9290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:37.564023    9290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:37.564194    9290 out.go:352] Setting JSON to false
	I1209 03:30:37.564206    9290 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:37.564253    9290 notify.go:220] Checking for updates...
	I1209 03:30:37.564495    9290 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:37.564505    9290 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:37.564861    9290 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:37.564866    9290 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:37.564869    9290 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:37.565929    7820 retry.go:31] will retry after 2.378143088s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (79.070917ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:40.022224    9292 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:40.022454    9292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:40.022458    9292 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:40.022461    9292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:40.022599    9292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:40.022757    9292 out.go:352] Setting JSON to false
	I1209 03:30:40.022769    9292 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:40.022806    9292 notify.go:220] Checking for updates...
	I1209 03:30:40.024022    9292 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:40.024036    9292 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:40.024330    9292 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:40.024337    9292 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:40.024340    9292 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:40.025641    7820 retry.go:31] will retry after 3.663419433s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (72.458041ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:43.761529    9299 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:43.761818    9299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:43.761823    9299 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:43.761825    9299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:43.762003    9299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:43.762212    9299 out.go:352] Setting JSON to false
	I1209 03:30:43.762233    9299 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:43.762279    9299 notify.go:220] Checking for updates...
	I1209 03:30:43.762510    9299 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:43.762520    9299 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:43.762848    9299 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:43.762853    9299 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:43.762856    9299 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:43.763986    7820 retry.go:31] will retry after 4.069041394s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (79.924875ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:47.913128    9304 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:47.913389    9304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:47.913393    9304 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:47.913396    9304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:47.913550    9304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:47.913741    9304 out.go:352] Setting JSON to false
	I1209 03:30:47.913753    9304 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:47.913795    9304 notify.go:220] Checking for updates...
	I1209 03:30:47.914045    9304 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:47.914054    9304 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:47.914383    9304 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:47.914387    9304 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:47.914390    9304 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:47.915409    7820 retry.go:31] will retry after 9.464608309s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (78.08925ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:30:57.458277    9307 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:30:57.458525    9307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:57.458529    9307 out.go:358] Setting ErrFile to fd 2...
	I1209 03:30:57.458533    9307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:30:57.458682    9307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:30:57.458844    9307 out.go:352] Setting JSON to false
	I1209 03:30:57.458856    9307 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:30:57.458894    9307 notify.go:220] Checking for updates...
	I1209 03:30:57.459112    9307 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:30:57.459122    9307 status.go:174] checking status of multinode-263000 ...
	I1209 03:30:57.459439    9307 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:30:57.459443    9307 status.go:384] host is not running, skipping remaining checks
	I1209 03:30:57.459446    9307 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 03:30:57.460474    7820 retry.go:31] will retry after 15.821784962s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr: exit status 7 (79.074ms)

                                                
                                                
-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:31:13.361429    9310 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:31:13.361680    9310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:13.361684    9310 out.go:358] Setting ErrFile to fd 2...
	I1209 03:31:13.361687    9310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:13.361842    9310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:31:13.362021    9310 out.go:352] Setting JSON to false
	I1209 03:31:13.362033    9310 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:31:13.362079    9310 notify.go:220] Checking for updates...
	I1209 03:31:13.362320    9310 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:31:13.362329    9310 status.go:174] checking status of multinode-263000 ...
	I1209 03:31:13.362635    9310 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:31:13.362639    9310 status.go:384] host is not running, skipping remaining checks
	I1209 03:31:13.362642    9310 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-263000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (35.919958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (41.22s)
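
The "retry.go:31] will retry after ..." lines above come from a helper that reruns the status check with growing, jittered delays until a deadline passes. A self-contained sketch of that pattern with shortened timings; retryAfter is a hypothetical stand-in, and the real helper's signature and timing policy may differ:

// retry_sketch.go - illustrative backoff loop, not minikube's retry package.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter reruns f with growing, jittered delays until it succeeds
// or the deadline elapses.
func retryAfter(deadline time.Duration, f func() error) error {
	start := time.Now()
	delay := 100 * time.Millisecond
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	_ = retryAfter(2*time.Second, func() error {
		return errors.New("exit status 7") // the status check never recovers here
	})
}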

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-263000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-263000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-263000: (3.900673667s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.235670042s)

                                                
                                                
-- stdout --
	* [multinode-263000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
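The root cause of this restart failure is visible in the stdout above: QEMU cannot connect to /var/run/socket_vmnet, so every VM restart dies with "Connection refused". A quick way to confirm whether the socket_vmnet daemon is accepting connections; this is a sketch assuming only the SocketVMnetPath shown in the profile config, not a minikube facility:

// vmnet_check.go - illustrative probe of the socket_vmnet unix socket.
package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// "connection refused" here matches the QEMU error in this log:
		// the socket path exists but no daemon is listening on it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}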
** stderr ** 
	I1209 03:31:17.405137    9334 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:31:17.405319    9334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:17.405323    9334 out.go:358] Setting ErrFile to fd 2...
	I1209 03:31:17.405326    9334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:17.405464    9334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:31:17.406729    9334 out.go:352] Setting JSON to false
	I1209 03:31:17.427235    9334 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5448,"bootTime":1733738429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:31:17.427312    9334 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:31:17.431768    9334 out.go:177] * [multinode-263000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:31:17.438628    9334 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:31:17.438649    9334 notify.go:220] Checking for updates...
	I1209 03:31:17.446681    9334 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:31:17.449663    9334 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:31:17.452693    9334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:31:17.455566    9334 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:31:17.458669    9334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:31:17.462007    9334 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:31:17.462064    9334 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:31:17.465647    9334 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:31:17.472622    9334 start.go:297] selected driver: qemu2
	I1209 03:31:17.472627    9334 start.go:901] validating driver "qemu2" against &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:31:17.472675    9334 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:31:17.475302    9334 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:31:17.475327    9334 cni.go:84] Creating CNI manager for ""
	I1209 03:31:17.475350    9334 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 03:31:17.475408    9334 start.go:340] cluster config:
	{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:31:17.480127    9334 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:17.488633    9334 out.go:177] * Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	I1209 03:31:17.492685    9334 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:31:17.492702    9334 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:31:17.492714    9334 cache.go:56] Caching tarball of preloaded images
	I1209 03:31:17.492799    9334 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:31:17.492804    9334 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:31:17.492870    9334 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/multinode-263000/config.json ...
	I1209 03:31:17.493352    9334 start.go:360] acquireMachinesLock for multinode-263000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:31:17.493402    9334 start.go:364] duration metric: took 44.208µs to acquireMachinesLock for "multinode-263000"
	I1209 03:31:17.493411    9334 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:31:17.493416    9334 fix.go:54] fixHost starting: 
	I1209 03:31:17.493543    9334 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W1209 03:31:17.493552    9334 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:31:17.501688    9334 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I1209 03:31:17.505634    9334 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:31:17.505677    9334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:12:28:e0:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:31:17.507963    9334 main.go:141] libmachine: STDOUT: 
	I1209 03:31:17.507984    9334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:31:17.508017    9334 fix.go:56] duration metric: took 14.599291ms for fixHost
	I1209 03:31:17.508021    9334 start.go:83] releasing machines lock for "multinode-263000", held for 14.614333ms
	W1209 03:31:17.508027    9334 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:31:17.508070    9334 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:31:17.508075    9334 start.go:729] Will try again in 5 seconds ...
	I1209 03:31:22.510125    9334 start.go:360] acquireMachinesLock for multinode-263000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:31:22.510604    9334 start.go:364] duration metric: took 386.416µs to acquireMachinesLock for "multinode-263000"
	I1209 03:31:22.510762    9334 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:31:22.510782    9334 fix.go:54] fixHost starting: 
	I1209 03:31:22.511546    9334 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W1209 03:31:22.511572    9334 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:31:22.520984    9334 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I1209 03:31:22.525082    9334 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:31:22.525361    9334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:12:28:e0:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:31:22.535053    9334 main.go:141] libmachine: STDOUT: 
	I1209 03:31:22.535107    9334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:31:22.535165    9334 fix.go:56] duration metric: took 24.38475ms for fixHost
	I1209 03:31:22.535180    9334 start.go:83] releasing machines lock for "multinode-263000", held for 24.513875ms
	W1209 03:31:22.535415    9334 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:31:22.543038    9334 out.go:201] 
	W1209 03:31:22.547139    9334 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:31:22.547193    9334 out.go:270] * 
	* 
	W1209 03:31:22.550126    9334 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:31:22.556967    9334 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-263000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-263000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (36.468291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.28s)
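Every failure in this block has the same root cause: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand a network file descriptor to qemu-system-aarch64, and the driver start aborts with "Connection refused". A minimal Go sketch of a pre-flight probe for that socket (a hypothetical helper, not part of the test suite; the socket path is taken from the logs above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the unix socket that the qemu2 driver passes to
	// socket_vmnet_client. A "connection refused" error here reproduces the
	// failure above and means the socket_vmnet daemon is not running.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Println("socket_vmnet is accepting connections")
	}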

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 node delete m03: exit status 83 (44.045625ms)

-- stdout --
	* The control-plane node multinode-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-263000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-263000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr: exit status 7 (34.222167ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1209 03:31:22.754095    9348 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:31:22.754269    9348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:22.754273    9348 out.go:358] Setting ErrFile to fd 2...
	I1209 03:31:22.754275    9348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:22.754441    9348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:31:22.754564    9348 out.go:352] Setting JSON to false
	I1209 03:31:22.754574    9348 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:31:22.754634    9348 notify.go:220] Checking for updates...
	I1209 03:31:22.754780    9348 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:31:22.754787    9348 status.go:174] checking status of multinode-263000 ...
	I1209 03:31:22.755028    9348 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:31:22.755031    9348 status.go:384] host is not running, skipping remaining checks
	I1209 03:31:22.755033    9348 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.364792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (3.56s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-263000 stop: (3.417052834s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status: exit status 7 (75.753875ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr: exit status 7 (35.919625ms)

-- stdout --
	multinode-263000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1209 03:31:26.318418    9374 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:31:26.318588    9374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:26.318592    9374 out.go:358] Setting ErrFile to fd 2...
	I1209 03:31:26.318594    9374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:26.318715    9374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:31:26.318834    9374 out.go:352] Setting JSON to false
	I1209 03:31:26.318845    9374 mustload.go:65] Loading cluster: multinode-263000
	I1209 03:31:26.318915    9374 notify.go:220] Checking for updates...
	I1209 03:31:26.319036    9374 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:31:26.319044    9374 status.go:174] checking status of multinode-263000 ...
	I1209 03:31:26.319271    9374 status.go:371] multinode-263000 host status = "Stopped" (err=<nil>)
	I1209 03:31:26.319275    9374 status.go:384] host is not running, skipping remaining checks
	I1209 03:31:26.319278    9374 status.go:176] multinode-263000 status: &{Name:multinode-263000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr": multinode-263000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-263000 status --alsologtostderr": multinode-263000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (34.54475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.56s)
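The post-mortem helper treats `minikube status` exit codes as data rather than hard failures: exit status 7 means the profile exists but the host is stopped, which is why it prints "status error: exit status 7 (may be ok)" and skips log retrieval. A hedged Go sketch of that interpretation (illustrative only; the binary path and profile name are taken from this report, and the real logic lives in helpers_test.go):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// The same status probe the post-mortem helper runs after a failed subtest.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-263000")
		out, err := cmd.Output()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host state: %s\n", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Exit status 7: the profile exists but the host is stopped, so
			// the helper skips log retrieval instead of treating this as fatal.
			fmt.Printf("host stopped (exit 7, may be ok): %s\n", out)
		default:
			fmt.Println("status check failed:", err)
		}
	}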

TestMultiNode/serial/RestartMultiNode (5.28s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.200412208s)

-- stdout --
	* [multinode-263000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-263000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:31:26.386577    9378 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:31:26.386725    9378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:26.386728    9378 out.go:358] Setting ErrFile to fd 2...
	I1209 03:31:26.386731    9378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:26.386876    9378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:31:26.388229    9378 out.go:352] Setting JSON to false
	I1209 03:31:26.407572    9378 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5457,"bootTime":1733738429,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:31:26.407655    9378 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:31:26.412103    9378 out.go:177] * [multinode-263000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:31:26.420084    9378 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:31:26.420140    9378 notify.go:220] Checking for updates...
	I1209 03:31:26.428022    9378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:31:26.435258    9378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:31:26.439060    9378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:31:26.442041    9378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:31:26.445033    9378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:31:26.448280    9378 config.go:182] Loaded profile config "multinode-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:31:26.448575    9378 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:31:26.452076    9378 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:31:26.458957    9378 start.go:297] selected driver: qemu2
	I1209 03:31:26.458963    9378 start.go:901] validating driver "qemu2" against &{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:31:26.459011    9378 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:31:26.461556    9378 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:31:26.461633    9378 cni.go:84] Creating CNI manager for ""
	I1209 03:31:26.461659    9378 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 03:31:26.461705    9378 start.go:340] cluster config:
	{Name:multinode-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:31:26.466162    9378 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:26.473952    9378 out.go:177] * Starting "multinode-263000" primary control-plane node in "multinode-263000" cluster
	I1209 03:31:26.477989    9378 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:31:26.478005    9378 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:31:26.478015    9378 cache.go:56] Caching tarball of preloaded images
	I1209 03:31:26.478082    9378 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:31:26.478087    9378 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:31:26.478137    9378 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/multinode-263000/config.json ...
	I1209 03:31:26.478627    9378 start.go:360] acquireMachinesLock for multinode-263000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:31:26.478677    9378 start.go:364] duration metric: took 44.291µs to acquireMachinesLock for "multinode-263000"
	I1209 03:31:26.478686    9378 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:31:26.478692    9378 fix.go:54] fixHost starting: 
	I1209 03:31:26.478818    9378 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W1209 03:31:26.478827    9378 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:31:26.485971    9378 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I1209 03:31:26.490078    9378 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:31:26.490128    9378 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:12:28:e0:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:31:26.492508    9378 main.go:141] libmachine: STDOUT: 
	I1209 03:31:26.492533    9378 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:31:26.492564    9378 fix.go:56] duration metric: took 13.870958ms for fixHost
	I1209 03:31:26.492568    9378 start.go:83] releasing machines lock for "multinode-263000", held for 13.886042ms
	W1209 03:31:26.492575    9378 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:31:26.492617    9378 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:31:26.492622    9378 start.go:729] Will try again in 5 seconds ...
	I1209 03:31:31.493769    9378 start.go:360] acquireMachinesLock for multinode-263000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:31:31.494254    9378 start.go:364] duration metric: took 333µs to acquireMachinesLock for "multinode-263000"
	I1209 03:31:31.494383    9378 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:31:31.494401    9378 fix.go:54] fixHost starting: 
	I1209 03:31:31.495127    9378 fix.go:112] recreateIfNeeded on multinode-263000: state=Stopped err=<nil>
	W1209 03:31:31.495152    9378 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:31:31.500728    9378 out.go:177] * Restarting existing qemu2 VM for "multinode-263000" ...
	I1209 03:31:31.508699    9378 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:31:31.509003    9378 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:35:12:28:e0:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/multinode-263000/disk.qcow2
	I1209 03:31:31.519288    9378 main.go:141] libmachine: STDOUT: 
	I1209 03:31:31.519348    9378 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:31:31.519429    9378 fix.go:56] duration metric: took 25.026125ms for fixHost
	I1209 03:31:31.519445    9378 start.go:83] releasing machines lock for "multinode-263000", held for 25.17075ms
	W1209 03:31:31.519651    9378 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:31:31.526684    9378 out.go:201] 
	W1209 03:31:31.530685    9378 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:31:31.530706    9378 out.go:270] * 
	* 
	W1209 03:31:31.533191    9378 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:31:31.541715    9378 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-263000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (76.965541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.28s)
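The timestamps above (03:31:26.49 through 03:31:31.52) show the start path retrying exactly once: it logs "StartHost failed, but will try again", waits five seconds, re-acquires the machines lock, repeats the restart, and only then exits with GUEST_PROVISION. A simplified Go sketch of that retry shape, inferred from the log rather than copied from minikube's source:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails throughout this run.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			err = startHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return
		}
		fmt.Println("host started")
	}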

TestMultiNode/serial/ValidateNameConflict (20.16s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-263000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000-m01 --driver=qemu2 : exit status 80 (9.84564875s)

-- stdout --
	* [multinode-263000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-263000-m01" primary control-plane node in "multinode-263000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-263000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-263000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-263000-m02 --driver=qemu2 : exit status 80 (10.064292625s)

-- stdout --
	* [multinode-263000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-263000-m02" primary control-plane node in "multinode-263000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-263000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-263000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-263000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-263000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-263000: exit status 83 (88.03425ms)

-- stdout --
	* The control-plane node multinode-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-263000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-263000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-263000 -n multinode-263000: exit status 7 (35.537917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-263000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.16s)

TestPreload (10.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-644000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-644000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.96317225s)

-- stdout --
	* [test-preload-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-644000" primary control-plane node in "test-preload-644000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:31:51.942742    9436 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:31:51.942899    9436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:51.942902    9436 out.go:358] Setting ErrFile to fd 2...
	I1209 03:31:51.942905    9436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:31:51.943039    9436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:31:51.944167    9436 out.go:352] Setting JSON to false
	I1209 03:31:51.961956    9436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5482,"bootTime":1733738429,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:31:51.962039    9436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:31:51.967600    9436 out.go:177] * [test-preload-644000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:31:51.974561    9436 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:31:51.974598    9436 notify.go:220] Checking for updates...
	I1209 03:31:51.982613    9436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:31:51.986574    9436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:31:51.990604    9436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:31:51.993610    9436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:31:51.996627    9436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:31:51.999879    9436 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:31:51.999947    9436 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:31:52.004703    9436 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:31:52.011585    9436 start.go:297] selected driver: qemu2
	I1209 03:31:52.011591    9436 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:31:52.011597    9436 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:31:52.014136    9436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:31:52.016644    9436 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:31:52.020555    9436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:31:52.020571    9436 cni.go:84] Creating CNI manager for ""
	I1209 03:31:52.020591    9436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:31:52.020600    9436 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:31:52.020630    9436 start.go:340] cluster config:
	{Name:test-preload-644000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:31:52.025460    9436 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.032590    9436 out.go:177] * Starting "test-preload-644000" primary control-plane node in "test-preload-644000" cluster
	I1209 03:31:52.036576    9436 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1209 03:31:52.036663    9436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/test-preload-644000/config.json ...
	I1209 03:31:52.036671    9436 cache.go:107] acquiring lock: {Name:mkf0ddcf765528f2b9e7d6371fc550b01145cef4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036693    9436 cache.go:107] acquiring lock: {Name:mk153c99bfabbb08213c70ff73942038ec901632 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036682    9436 cache.go:107] acquiring lock: {Name:mkc8655c572a37c60afa3b79113c48a99aa97ba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036711    9436 cache.go:107] acquiring lock: {Name:mkddc95d96b53a8ca90abf1dc9c0bd93aacb010a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036774    9436 cache.go:107] acquiring lock: {Name:mk34b882b6a5bf9b1dc9c1258572f9a28c1d1915 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036882    9436 cache.go:107] acquiring lock: {Name:mk47db5443db5a8d4a63cba0f624619dbf0d1c56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036886    9436 cache.go:107] acquiring lock: {Name:mk758280fc2f77e2b4ee7a73f2131e50498fb982 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.036903    9436 cache.go:107] acquiring lock: {Name:mk1feb17acaa3569c88530877c3973609623f5e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:31:52.037264    9436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 03:31:52.037314    9436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 03:31:52.037359    9436 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 03:31:52.036680    9436 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/test-preload-644000/config.json: {Name:mkc5ea2f59d7c80bfda3a6af0d640f10ecb71dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:31:52.037518    9436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:31:52.037601    9436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 03:31:52.037606    9436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:31:52.037637    9436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 03:31:52.037602    9436 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:31:52.037818    9436 start.go:360] acquireMachinesLock for test-preload-644000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:31:52.037876    9436 start.go:364] duration metric: took 46.833µs to acquireMachinesLock for "test-preload-644000"
	I1209 03:31:52.037889    9436 start.go:93] Provisioning new machine with config: &{Name:test-preload-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:31:52.037938    9436 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:31:52.045595    9436 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:31:52.049739    9436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 03:31:52.049758    9436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 03:31:52.049816    9436 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:31:52.049853    9436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:31:52.049893    9436 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 03:31:52.049919    9436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 03:31:52.050553    9436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 03:31:52.050675    9436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:31:52.064740    9436 start.go:159] libmachine.API.Create for "test-preload-644000" (driver="qemu2")
	I1209 03:31:52.064760    9436 client.go:168] LocalClient.Create starting
	I1209 03:31:52.064850    9436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:31:52.064892    9436 main.go:141] libmachine: Decoding PEM data...
	I1209 03:31:52.064908    9436 main.go:141] libmachine: Parsing certificate...
	I1209 03:31:52.064945    9436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:31:52.064977    9436 main.go:141] libmachine: Decoding PEM data...
	I1209 03:31:52.064983    9436 main.go:141] libmachine: Parsing certificate...
	I1209 03:31:52.065464    9436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:31:52.261591    9436 main.go:141] libmachine: Creating SSH key...
	I1209 03:31:52.411829    9436 main.go:141] libmachine: Creating Disk image...
	I1209 03:31:52.411850    9436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:31:52.412092    9436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2
	I1209 03:31:52.421794    9436 main.go:141] libmachine: STDOUT: 
	I1209 03:31:52.421812    9436 main.go:141] libmachine: STDERR: 
	I1209 03:31:52.421872    9436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2 +20000M
	I1209 03:31:52.431724    9436 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:31:52.431746    9436 main.go:141] libmachine: STDERR: 
	I1209 03:31:52.431762    9436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2
	I1209 03:31:52.431767    9436 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:31:52.431782    9436 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:31:52.431824    9436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:14:08:eb:01:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2
	I1209 03:31:52.434236    9436 main.go:141] libmachine: STDOUT: 
	I1209 03:31:52.434252    9436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:31:52.434269    9436 client.go:171] duration metric: took 369.512959ms to LocalClient.Create
	I1209 03:31:52.472815    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1209 03:31:52.503520    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1209 03:31:52.564680    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 03:31:52.739143    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 03:31:52.741146    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1209 03:31:52.774093    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W1209 03:31:52.859776    9436 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 03:31:52.859852    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 03:31:52.904786    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1209 03:31:52.904835    9436 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 868.150958ms
	I1209 03:31:52.904872    9436 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1209 03:31:53.166727    9436 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 03:31:53.166820    9436 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 03:31:53.612849    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 03:31:53.612918    9436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.576272292s
	I1209 03:31:53.612949    9436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 03:31:54.434529    9436 start.go:128] duration metric: took 2.396603292s to createHost
	I1209 03:31:54.434591    9436 start.go:83] releasing machines lock for "test-preload-644000", held for 2.396749083s
	W1209 03:31:54.434653    9436 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:31:54.451536    9436 out.go:177] * Deleting "test-preload-644000" in qemu2 ...
	W1209 03:31:54.484168    9436 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:31:54.484204    9436 start.go:729] Will try again in 5 seconds ...
	I1209 03:31:54.496034    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1209 03:31:54.496067    9436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.459314458s
	I1209 03:31:54.496087    9436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1209 03:31:55.260725    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1209 03:31:55.260772    9436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.224045459s
	I1209 03:31:55.260797    9436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1209 03:31:56.952539    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1209 03:31:56.952612    9436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.916024958s
	I1209 03:31:56.952639    9436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1209 03:31:58.668536    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1209 03:31:58.668595    9436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.632030334s
	I1209 03:31:58.668680    9436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1209 03:31:59.385169    9436 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1209 03:31:59.385222    9436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.348496333s
	I1209 03:31:59.385252    9436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1209 03:31:59.484286    9436 start.go:360] acquireMachinesLock for test-preload-644000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:31:59.484764    9436 start.go:364] duration metric: took 418.584µs to acquireMachinesLock for "test-preload-644000"
	I1209 03:31:59.484822    9436 start.go:93] Provisioning new machine with config: &{Name:test-preload-644000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-644000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:31:59.485087    9436 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:31:59.506633    9436 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:31:59.553503    9436 start.go:159] libmachine.API.Create for "test-preload-644000" (driver="qemu2")
	I1209 03:31:59.553568    9436 client.go:168] LocalClient.Create starting
	I1209 03:31:59.553700    9436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:31:59.553784    9436 main.go:141] libmachine: Decoding PEM data...
	I1209 03:31:59.553806    9436 main.go:141] libmachine: Parsing certificate...
	I1209 03:31:59.553873    9436 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:31:59.553929    9436 main.go:141] libmachine: Decoding PEM data...
	I1209 03:31:59.553948    9436 main.go:141] libmachine: Parsing certificate...
	I1209 03:31:59.554548    9436 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:31:59.726728    9436 main.go:141] libmachine: Creating SSH key...
	I1209 03:31:59.796650    9436 main.go:141] libmachine: Creating Disk image...
	I1209 03:31:59.796660    9436 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:31:59.796877    9436 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2
	I1209 03:31:59.806780    9436 main.go:141] libmachine: STDOUT: 
	I1209 03:31:59.806800    9436 main.go:141] libmachine: STDERR: 
	I1209 03:31:59.806869    9436 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2 +20000M
	I1209 03:31:59.815451    9436 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:31:59.815468    9436 main.go:141] libmachine: STDERR: 
	I1209 03:31:59.815482    9436 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2
	I1209 03:31:59.815488    9436 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:31:59.815498    9436 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:31:59.815534    9436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:fd:57:83:82:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/test-preload-644000/disk.qcow2
	I1209 03:31:59.817394    9436 main.go:141] libmachine: STDOUT: 
	I1209 03:31:59.817408    9436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:31:59.817421    9436 client.go:171] duration metric: took 263.852458ms to LocalClient.Create
	I1209 03:32:01.818509    9436 start.go:128] duration metric: took 2.333431334s to createHost
	I1209 03:32:01.818583    9436 start.go:83] releasing machines lock for "test-preload-644000", held for 2.333841333s
	W1209 03:32:01.818878    9436 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:01.835440    9436 out.go:201] 
	W1209 03:32:01.840487    9436 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:32:01.840513    9436 out.go:270] * 
	W1209 03:32:01.843361    9436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:32:01.857247    9436 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-644000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-09 03:32:01.875517 -0800 PST m=+627.005272793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-644000 -n test-preload-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-644000 -n test-preload-644000: exit status 7 (72.583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-644000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-644000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-644000
--- FAIL: TestPreload (10.12s)
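
All of the VM-create attempts above fail at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never starts. A minimal check of the daemon, sketched on the assumption that socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (the paths are the ones in the logs; the service commands are an assumption about this host's setup):

	# Is the unix socket present, and is the daemon registered with launchd?
	ls -l /var/run/socket_vmnet
	sudo brew services info socket_vmnet
	# The daemon must run as root to use vmnet; (re)starting it recreates the socket.
	sudo brew services start socket_vmnet

With the socket back, the qemu-system-aarch64 command lines logged above should connect instead of exiting with "Connection refused".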

TestScheduledStopUnix (10.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-146000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-146000 --memory=2048 --driver=qemu2 : exit status 80 (9.97112275s)

-- stdout --
	* [scheduled-stop-146000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-146000" primary control-plane node in "scheduled-stop-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-146000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-146000" primary control-plane node in "scheduled-stop-146000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-09 03:32:12.005771 -0800 PST m=+637.135715460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-146000 -n scheduled-stop-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-146000 -n scheduled-stop-146000: exit status 7 (71.900209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-146000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-146000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-146000
--- FAIL: TestScheduledStopUnix (10.13s)
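
Same root cause as TestPreload. socket_vmnet_client only connects to the socket and then execs the rest of its argument list with the connection passed down as a file descriptor (fd=3 in the -netdev socket options above), so it can be probed without QEMU at all; a sketch, assuming the install paths shown in the logs and using /usr/bin/true as a stand-in for the qemu command:

	# Exits 0 once the socket connection is handed to the child process, and
	# reproduces the tests' "Connection refused" when the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true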

TestSkaffold (12.28s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2894197401 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2894197401 version: (1.011271042s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-754000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-754000 --memory=2600 --driver=qemu2 : exit status 80 (9.827582125s)

-- stdout --
	* [skaffold-754000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-754000" primary control-plane node in "skaffold-754000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-754000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-754000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-754000" primary control-plane node in "skaffold-754000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-754000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-09 03:32:24.284906 -0800 PST m=+649.415080085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-754000 -n skaffold-754000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-754000 -n skaffold-754000: exit status 7 (68.4065ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-754000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-754000
--- FAIL: TestSkaffold (12.28s)
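
Worth noting: everything before the network step succeeds in these runs. The disk-image creation that libmachine logs reduces to two stock qemu-img invocations (file names shortened here for illustration):

	# Convert the raw boot disk to qcow2, then grow it by the requested 20000 MB.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

Both report success ("Image resized.", empty STDERR), which isolates the failures to the socket_vmnet layer rather than QEMU or the image pipeline.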

TestRunningBinaryUpgrade (627.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.112025581 start -p running-upgrade-765000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.112025581 start -p running-upgrade-765000 --memory=2200 --vm-driver=qemu2 : (1m3.157774875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-765000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-765000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m50.033304458s)

-- stdout --
	* [running-upgrade-765000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-765000" primary control-plane node in "running-upgrade-765000" cluster
	* Updating the running qemu2 "running-upgrade-765000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1209 03:33:50.714908    9658 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:33:50.715086    9658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:33:50.715093    9658 out.go:358] Setting ErrFile to fd 2...
	I1209 03:33:50.715096    9658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:33:50.715242    9658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:33:50.716342    9658 out.go:352] Setting JSON to false
	I1209 03:33:50.735212    9658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5601,"bootTime":1733738429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:33:50.735299    9658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:33:50.739376    9658 out.go:177] * [running-upgrade-765000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:33:50.747323    9658 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:33:50.747441    9658 notify.go:220] Checking for updates...
	I1209 03:33:50.753336    9658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:33:50.757355    9658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:33:50.760382    9658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:33:50.764343    9658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:33:50.767353    9658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:33:50.771715    9658 config.go:182] Loaded profile config "running-upgrade-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:33:50.774333    9658 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 03:33:50.777362    9658 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:33:50.781383    9658 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:33:50.788350    9658 start.go:297] selected driver: qemu2
	I1209 03:33:50.788355    9658 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:33:50.788396    9658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:33:50.790970    9658 cni.go:84] Creating CNI manager for ""
	I1209 03:33:50.791006    9658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:33:50.791042    9658 start.go:340] cluster config:
	{Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:33:50.791097    9658 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:33:50.799340    9658 out.go:177] * Starting "running-upgrade-765000" primary control-plane node in "running-upgrade-765000" cluster
	I1209 03:33:50.803387    9658 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:33:50.803407    9658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1209 03:33:50.803415    9658 cache.go:56] Caching tarball of preloaded images
	I1209 03:33:50.803471    9658 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:33:50.803476    9658 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1209 03:33:50.803532    9658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/config.json ...
	I1209 03:33:50.803928    9658 start.go:360] acquireMachinesLock for running-upgrade-765000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:34:03.923046    9658 start.go:364] duration metric: took 13.11935275s to acquireMachinesLock for "running-upgrade-765000"
	I1209 03:34:03.923066    9658 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:34:03.923074    9658 fix.go:54] fixHost starting: 
	I1209 03:34:03.923741    9658 fix.go:112] recreateIfNeeded on running-upgrade-765000: state=Running err=<nil>
	W1209 03:34:03.923754    9658 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:34:03.928323    9658 out.go:177] * Updating the running qemu2 "running-upgrade-765000" VM ...
	I1209 03:34:03.936145    9658 machine.go:93] provisionDockerMachine start ...
	I1209 03:34:03.936218    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.936328    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:03.936332    9658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 03:34:04.005130    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-765000
	
	I1209 03:34:04.005145    9658 buildroot.go:166] provisioning hostname "running-upgrade-765000"
	I1209 03:34:04.005205    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.005323    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.005331    9658 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-765000 && echo "running-upgrade-765000" | sudo tee /etc/hostname
	I1209 03:34:04.077834    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-765000
	
	I1209 03:34:04.077906    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.078105    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.078116    9658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-765000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-765000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-765000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:34:04.145380    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:34:04.145393    9658 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20068-6536/.minikube CaCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20068-6536/.minikube}
	I1209 03:34:04.145402    9658 buildroot.go:174] setting up certificates
	I1209 03:34:04.145407    9658 provision.go:84] configureAuth start
	I1209 03:34:04.145411    9658 provision.go:143] copyHostCerts
	I1209 03:34:04.145483    9658 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem, removing ...
	I1209 03:34:04.145492    9658 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem
	I1209 03:34:04.145946    9658 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem (1078 bytes)
	I1209 03:34:04.146176    9658 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem, removing ...
	I1209 03:34:04.146180    9658 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem
	I1209 03:34:04.146228    9658 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem (1123 bytes)
	I1209 03:34:04.146366    9658 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem, removing ...
	I1209 03:34:04.146369    9658 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem
	I1209 03:34:04.146416    9658 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem (1675 bytes)
	I1209 03:34:04.146525    9658 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-765000 san=[127.0.0.1 localhost minikube running-upgrade-765000]
	I1209 03:34:04.206787    9658 provision.go:177] copyRemoteCerts
	I1209 03:34:04.206839    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:34:04.206848    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:34:04.244513    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:34:04.251909    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:34:04.258628    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 03:34:04.265891    9658 provision.go:87] duration metric: took 120.48125ms to configureAuth
	I1209 03:34:04.265900    9658 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:34:04.266013    9658 config.go:182] Loaded profile config "running-upgrade-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:34:04.266070    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.266167    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.266171    9658 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 03:34:04.334558    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 03:34:04.334567    9658 buildroot.go:70] root file system type: tmpfs
	I1209 03:34:04.334623    9658 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 03:34:04.334685    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.334801    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.334834    9658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 03:34:04.409065    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1209 03:34:04.409144    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.409261    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.409270    9658 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 03:34:04.480001    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:34:04.480013    9658 machine.go:96] duration metric: took 543.871958ms to provisionDockerMachine
	I1209 03:34:04.480020    9658 start.go:293] postStartSetup for "running-upgrade-765000" (driver="qemu2")
	I1209 03:34:04.480026    9658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:34:04.480103    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:34:04.480112    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:34:04.516161    9658 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:34:04.517459    9658 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 03:34:04.517468    9658 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/addons for local assets ...
	I1209 03:34:04.517544    9658 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/files for local assets ...
	I1209 03:34:04.517629    9658 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem -> 78202.pem in /etc/ssl/certs
	I1209 03:34:04.517725    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:34:04.520320    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:04.528027    9658 start.go:296] duration metric: took 48.002458ms for postStartSetup
	I1209 03:34:04.528042    9658 fix.go:56] duration metric: took 604.983958ms for fixHost
	I1209 03:34:04.528089    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.528190    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.528196    9658 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:34:04.597091    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744044.426322980
	
	I1209 03:34:04.597099    9658 fix.go:216] guest clock: 1733744044.426322980
	I1209 03:34:04.597105    9658 fix.go:229] Guest: 2024-12-09 03:34:04.42632298 -0800 PST Remote: 2024-12-09 03:34:04.528044 -0800 PST m=+13.839284959 (delta=-101.72102ms)
	I1209 03:34:04.597116    9658 fix.go:200] guest clock delta is within tolerance: -101.72102ms
	I1209 03:34:04.597121    9658 start.go:83] releasing machines lock for "running-upgrade-765000", held for 674.078292ms
	I1209 03:34:04.597197    9658 ssh_runner.go:195] Run: cat /version.json
	I1209 03:34:04.597206    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:34:04.597197    9658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:34:04.597232    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	W1209 03:34:04.597736    9658 sshutil.go:64] dial failure (will retry): dial tcp [::1]:60526: connect: connection refused
	I1209 03:34:04.597754    9658 retry.go:31] will retry after 286.111134ms: dial tcp [::1]:60526: connect: connection refused
	W1209 03:34:04.630614    9658 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 03:34:04.630661    9658 ssh_runner.go:195] Run: systemctl --version
	I1209 03:34:04.632429    9658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:34:04.634038    9658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:34:04.634070    9658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 03:34:04.637276    9658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 03:34:04.641949    9658 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 03:34:04.641956    9658 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.642030    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.647249    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 03:34:04.650000    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 03:34:04.652995    9658 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 03:34:04.653024    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 03:34:04.656395    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.659772    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 03:34:04.663025    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.666022    9658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:34:04.669377    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 03:34:04.672229    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 03:34:04.675733    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 03:34:04.679135    9658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:34:04.682127    9658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:34:04.684786    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:04.787847    9658 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 03:34:04.799515    9658 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.799597    9658 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 03:34:04.807272    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.811762    9658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:34:04.818502    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.823541    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 03:34:04.828464    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.834086    9658 ssh_runner.go:195] Run: which cri-dockerd
	I1209 03:34:04.835434    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 03:34:04.838016    9658 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 03:34:04.843140    9658 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 03:34:04.945512    9658 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 03:34:05.051811    9658 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 03:34:05.051867    9658 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 03:34:05.058554    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:05.164918    9658 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:21.484796    9658 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.320166583s)
	I1209 03:34:21.484882    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 03:34:21.491283    9658 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1209 03:34:21.498798    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:21.504374    9658 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 03:34:21.577634    9658 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 03:34:21.638515    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:21.727938    9658 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 03:34:21.734838    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:21.739566    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:21.819646    9658 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 03:34:21.862263    9658 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 03:34:21.862374    9658 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 03:34:21.864493    9658 start.go:563] Will wait 60s for crictl version
	I1209 03:34:21.864562    9658 ssh_runner.go:195] Run: which crictl
	I1209 03:34:21.866295    9658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:34:21.878885    9658 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 03:34:21.878962    9658 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:21.891945    9658 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:21.909032    9658 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 03:34:21.909168    9658 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 03:34:21.910661    9658 kubeadm.go:883] updating cluster {Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 03:34:21.910701    9658 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:34:21.910746    9658 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:21.928820    9658 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:21.928830    9658 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:21.928903    9658 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:21.932265    9658 ssh_runner.go:195] Run: which lz4
	I1209 03:34:21.933949    9658 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:34:21.935082    9658 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:34:21.935092    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1209 03:34:22.904189    9658 docker.go:653] duration metric: took 970.299125ms to copy over tarball
	I1209 03:34:22.904262    9658 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 03:34:24.031865    9658 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.127608917s)
	I1209 03:34:24.031879    9658 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 03:34:24.047533    9658 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:24.050374    9658 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 03:34:24.055457    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:24.141701    9658 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:25.735901    9658 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.594212875s)
	I1209 03:34:25.736012    9658 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:25.754507    9658 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:25.754517    9658 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:25.754522    9658 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 03:34:25.760127    9658 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:25.763291    9658 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:25.765776    9658 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:25.765818    9658 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:25.767664    9658 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:25.767581    9658 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:25.769397    9658 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:25.770127    9658 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:25.770387    9658 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:25.770882    9658 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:25.771936    9658 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:25.772065    9658 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:25.772941    9658 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 03:34:25.773086    9658 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:25.774252    9658 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:25.774584    9658 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 03:34:26.366021    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:26.372345    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:26.379720    9658 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 03:34:26.379758    9658 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:26.379862    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:26.387590    9658 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 03:34:26.387609    9658 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:26.387663    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:26.392829    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:26.401690    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 03:34:26.406041    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 03:34:26.412692    9658 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 03:34:26.412727    9658 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:26.412797    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:26.422513    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 03:34:26.455672    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:26.465946    9658 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 03:34:26.465970    9658 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:26.466033    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:26.476349    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1209 03:34:26.485895    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:26.496813    9658 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 03:34:26.496831    9658 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:26.496896    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:26.506902    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1209 03:34:26.548356    9658 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:26.548510    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:26.559709    9658 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 03:34:26.559733    9658 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:26.559797    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:26.570314    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 03:34:26.570439    9658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:26.572092    9658 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 03:34:26.572104    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1209 03:34:26.619306    9658 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:26.619319    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 03:34:26.630221    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 03:34:26.666193    9658 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1209 03:34:26.666233    9658 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 03:34:26.666254    9658 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 03:34:26.666317    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1209 03:34:26.678424    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 03:34:26.678552    9658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 03:34:26.680065    9658 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 03:34:26.680086    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1209 03:34:26.687677    9658 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 03:34:26.687684    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1209 03:34:26.688307    9658 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:26.688422    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:26.718311    9658 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1209 03:34:26.718349    9658 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 03:34:26.718368    9658 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:26.718435    9658 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:26.736813    9658 cache_images.go:92] duration metric: took 982.300958ms to LoadCachedImages
	W1209 03:34:26.736857    9658 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1209 03:34:26.736862    9658 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 03:34:26.736917    9658 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-765000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 03:34:26.737000    9658 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 03:34:26.750738    9658 cni.go:84] Creating CNI manager for ""
	I1209 03:34:26.750751    9658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:34:26.750760    9658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 03:34:26.750768    9658 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-765000 NodeName:running-upgrade-765000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:34:26.750847    9658 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-765000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 03:34:26.750916    9658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 03:34:26.754540    9658 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 03:34:26.754578    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:34:26.757672    9658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 03:34:26.763427    9658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:34:26.768487    9658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 03:34:26.773609    9658 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 03:34:26.774821    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:26.858362    9658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:34:26.864194    9658 certs.go:68] Setting up /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000 for IP: 10.0.2.15
	I1209 03:34:26.864206    9658 certs.go:194] generating shared ca certs ...
	I1209 03:34:26.864215    9658 certs.go:226] acquiring lock for ca certs: {Name:mkab7ef03786804c126b265c91619df81c881ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:26.864370    9658 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key
	I1209 03:34:26.864612    9658 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key
	I1209 03:34:26.864622    9658 certs.go:256] generating profile certs ...
	I1209 03:34:26.864804    9658 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.key
	I1209 03:34:26.864819    9658 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5
	I1209 03:34:26.864831    9658 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 03:34:26.995838    9658 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5 ...
	I1209 03:34:26.995847    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5: {Name:mk3d8b0b158c1e7ed7c5c1d9d3c8299c2774743f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:26.996194    9658 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5 ...
	I1209 03:34:26.996200    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5: {Name:mk5e0412c77b429448e56f506b3d7f4b764e026f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:26.996372    9658 certs.go:381] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt
	I1209 03:34:26.996509    9658 certs.go:385] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key
	I1209 03:34:26.996865    9658 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/proxy-client.key
	I1209 03:34:26.997022    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem (1338 bytes)
	W1209 03:34:26.997199    9658 certs.go:480] ignoring /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820_empty.pem, impossibly tiny 0 bytes
	I1209 03:34:26.997205    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:34:26.997376    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:34:26.997567    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:34:26.998237    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem (1675 bytes)
	I1209 03:34:26.998354    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:26.998920    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:34:27.006287    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:34:27.013070    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:34:27.019718    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:34:27.026259    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 03:34:27.033368    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 03:34:27.040667    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:34:27.047400    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 03:34:27.054166    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem --> /usr/share/ca-certificates/7820.pem (1338 bytes)
	I1209 03:34:27.061132    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /usr/share/ca-certificates/78202.pem (1708 bytes)
	I1209 03:34:27.067725    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 03:34:27.074521    9658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 03:34:27.079749    9658 ssh_runner.go:195] Run: openssl version
	I1209 03:34:27.081414    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7820.pem && ln -fs /usr/share/ca-certificates/7820.pem /etc/ssl/certs/7820.pem"
	I1209 03:34:27.084961    9658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7820.pem
	I1209 03:34:27.086525    9658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 11:22 /usr/share/ca-certificates/7820.pem
	I1209 03:34:27.086573    9658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7820.pem
	I1209 03:34:27.088520    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7820.pem /etc/ssl/certs/51391683.0"
	I1209 03:34:27.091161    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78202.pem && ln -fs /usr/share/ca-certificates/78202.pem /etc/ssl/certs/78202.pem"
	I1209 03:34:27.094269    9658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78202.pem
	I1209 03:34:27.095923    9658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 11:22 /usr/share/ca-certificates/78202.pem
	I1209 03:34:27.095960    9658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78202.pem
	I1209 03:34:27.097849    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78202.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 03:34:27.100877    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 03:34:27.103712    9658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:27.105056    9658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:27.105081    9658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:27.106835    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 03:34:27.109732    9658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 03:34:27.111063    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 03:34:27.112901    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 03:34:27.115015    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 03:34:27.116809    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 03:34:27.120216    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 03:34:27.121772    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 03:34:27.123692    9658 kubeadm.go:392] StartCluster: {Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:34:27.123774    9658 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:27.134195    9658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 03:34:27.137397    9658 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 03:34:27.137412    9658 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 03:34:27.137447    9658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 03:34:27.140135    9658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.140508    9658 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-765000" does not appear in /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:34:27.140602    9658 kubeconfig.go:62] /Users/jenkins/minikube-integration/20068-6536/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-765000" cluster setting kubeconfig missing "running-upgrade-765000" context setting]
	I1209 03:34:27.140793    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:27.141235    9658 kapi.go:59] client config for running-upgrade-765000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10431f740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:34:27.141692    9658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 03:34:27.144552    9658 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-765000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
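
The diff above is what triggers the reconfiguration: the upgraded tooling expects the CRI socket as a unix:// URI rather than a bare path, and the kubelet cgroup driver moves from systemd to cgroupfs. A quick manual way to confirm the socket answers on the URI form the new kubeadm.yaml references (a minimal sketch, assuming crictl is present in the guest, as the container-status commands later in this log also assume):

    # confirm cri-dockerd responds on the unix:// endpoint kubeadm.yaml.new uses
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
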
	I1209 03:34:27.144562    9658 kubeadm.go:1160] stopping kube-system containers ...
	I1209 03:34:27.144612    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:27.155897    9658 docker.go:483] Stopping containers: [1c740c03f549 17f5919310d0 f8298f4cf6b7 4a0867c31619 a42b643cfd15 ea74a1ab70f6 d3cb70f32269 7308bbae6c56 67a9fa94ff40 8c650fdc680b a11102cde514 1a73eb25f21c d3844cda8a7f 3aedd3462ec5 3305f5c92771 f22821d4ef46 41e895ffc8b0 7fa6b2f2ffef 499eb08d6e00 220cd1904346 266a6560f67c 76a9b1fd66d5 759fee327ac1 7ffc44e0f4b3 4cc6da64f4fb]
	I1209 03:34:27.155978    9658 ssh_runner.go:195] Run: docker stop 1c740c03f549 17f5919310d0 f8298f4cf6b7 4a0867c31619 a42b643cfd15 ea74a1ab70f6 d3cb70f32269 7308bbae6c56 67a9fa94ff40 8c650fdc680b a11102cde514 1a73eb25f21c d3844cda8a7f 3aedd3462ec5 3305f5c92771 f22821d4ef46 41e895ffc8b0 7fa6b2f2ffef 499eb08d6e00 220cd1904346 266a6560f67c 76a9b1fd66d5 759fee327ac1 7ffc44e0f4b3 4cc6da64f4fb
	I1209 03:34:27.167451    9658 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:34:27.254344    9658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:34:27.258276    9658 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Dec  9 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec  9 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  9 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Dec  9 11:33 /etc/kubernetes/scheduler.conf
	
	I1209 03:34:27.258316    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf
	I1209 03:34:27.261579    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.261617    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:34:27.264900    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf
	I1209 03:34:27.268049    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.268085    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:34:27.270739    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf
	I1209 03:34:27.273504    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.273532    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:34:27.276948    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf
	I1209 03:34:27.279625    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.279652    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
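
The four grep-and-remove cycles above are a simple staleness check: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted, so the kubeadm init phase kubeconfig run below regenerates it. Per file, the check reduces to (a sketch using the same endpoint as in the log):

    sudo grep -q "https://control-plane.minikube.internal:60625" /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf
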
	I1209 03:34:27.282209    9658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:34:27.285621    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:27.309121    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:27.736933    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:27.974120    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:28.005576    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:28.032764    9658 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:34:28.032851    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:28.534917    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:29.034900    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:29.039632    9658 api_server.go:72] duration metric: took 1.006890083s to wait for apiserver process to appear ...
	I1209 03:34:29.039641    9658 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:34:29.039657    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:34.041622    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:34.041644    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:39.042110    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:39.042130    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:44.042419    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:44.042462    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:49.042923    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:49.042989    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:54.043691    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:54.043796    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:59.045403    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:59.045440    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:04.046805    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:04.046906    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:09.049310    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:09.049402    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:14.051792    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:14.051814    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:19.052117    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:19.052206    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:24.054755    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:24.054852    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:29.056561    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
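
Each healthz probe above uses a short client-side timeout, so "context deadline exceeded" means the API server never answered within roughly 5s on each attempt; after a wait window expires, minikube gathers component logs (below) before retrying. An equivalent manual probe from inside the guest (a rough sketch, assuming curl is available; -k skips verification of the minikube CA and --max-time mirrors the per-request timeout):

    # poll the apiserver healthz endpoint the way the wait loop does
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz; do
      sleep 3
    done
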
	I1209 03:35:29.056946    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:29.088581    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:29.088729    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:29.107892    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:29.108001    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:29.122174    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:29.122244    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:29.134154    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:29.134236    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:29.144825    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:29.144901    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:29.156100    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:29.156185    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:29.166729    9658 logs.go:282] 0 containers: []
	W1209 03:35:29.166741    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:29.166812    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:29.185572    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:29.185587    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:29.185593    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:29.197304    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:29.197317    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:29.211014    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:29.211025    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:29.250961    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:29.250971    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:29.292966    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:29.292978    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:29.308524    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:29.308536    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:29.325338    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:29.325349    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:29.337436    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:29.337444    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:29.341753    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:29.341762    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:29.441603    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:29.441616    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:29.463447    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:29.463458    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:29.482623    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:29.482633    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:29.499240    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:29.499253    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:29.512119    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:29.512132    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:29.532338    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:29.532350    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:29.545374    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:29.545386    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:29.560960    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:29.560975    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:29.573997    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:29.574010    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:29.589188    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:29.589199    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:32.118371    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:37.120562    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:37.120841    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:37.142542    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:37.142669    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:37.157534    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:37.157616    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:37.169865    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:37.169966    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:37.182459    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:37.182537    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:37.193137    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:37.193216    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:37.203735    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:37.203808    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:37.214331    9658 logs.go:282] 0 containers: []
	W1209 03:35:37.214342    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:37.214408    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:37.226787    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:37.226804    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:37.226810    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:37.238170    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:37.238182    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:37.257613    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:37.257622    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:37.269663    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:37.269676    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:37.281855    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:37.281866    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:37.299546    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:37.299556    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:37.342344    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:37.342356    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:37.354097    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:37.354109    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:37.381104    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:37.381121    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:37.420494    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:37.420512    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:37.425805    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:37.425815    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:37.438437    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:37.438448    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:37.450176    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:37.450186    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:37.494239    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:37.494268    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:37.516302    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:37.516314    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:37.531826    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:37.531841    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:37.550639    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:37.550652    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:37.562843    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:37.562853    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:37.577271    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:37.577279    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:40.094599    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:45.096993    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:45.097304    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:45.124809    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:45.124950    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:45.142567    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:45.142667    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:45.155624    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:45.155713    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:45.167149    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:45.167236    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:45.178109    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:45.178185    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:45.189570    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:45.189655    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:45.199919    9658 logs.go:282] 0 containers: []
	W1209 03:35:45.199930    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:45.199999    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:45.210351    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:45.210368    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:45.210373    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:45.251818    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:45.251830    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:45.289245    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:45.289258    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:45.304166    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:45.304183    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:45.341214    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:45.341226    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:45.356004    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:45.356019    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:45.368355    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:45.368370    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:45.392887    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:45.392897    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:45.405389    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:45.405401    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:45.423845    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:45.423864    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:45.436202    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:45.436216    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:45.462569    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:45.462582    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:45.476740    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:45.476759    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:45.495883    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:45.495899    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:45.508739    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:45.508755    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:45.523480    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:45.523490    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:45.549382    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:45.549397    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:45.562533    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:45.562545    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:45.568633    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:45.568642    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:48.082949    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:53.085174    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:53.085421    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:53.108231    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:53.108361    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:53.125219    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:53.125320    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:53.138621    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:53.138714    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:53.153746    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:53.153832    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:53.164573    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:53.164654    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:53.175051    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:53.175142    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:53.188741    9658 logs.go:282] 0 containers: []
	W1209 03:35:53.188753    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:53.188824    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:53.199438    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:53.199461    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:53.199466    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:53.239470    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:53.239490    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:53.281446    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:53.281462    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:53.293656    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:53.293669    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:53.306205    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:53.306218    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:53.318457    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:53.318469    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:53.355051    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:53.355072    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:53.371090    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:53.371102    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:53.383223    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:53.383237    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:53.403224    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:53.403237    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:53.431736    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:53.431749    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:53.444416    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:53.444427    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:53.448807    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:53.448816    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:53.463347    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:53.463363    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:53.478853    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:53.478867    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:53.490995    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:53.491007    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:53.504978    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:53.504990    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:53.525804    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:53.525815    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:53.548375    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:53.548395    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:56.062892    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:01.065508    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:01.065813    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:01.091134    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:01.091276    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:01.108425    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:01.108532    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:01.125100    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:01.125193    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:01.136392    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:01.136473    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:01.148283    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:01.148367    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:01.160180    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:01.160270    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:01.171618    9658 logs.go:282] 0 containers: []
	W1209 03:36:01.171633    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:01.171710    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:01.182803    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:01.182822    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:01.182829    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:01.187888    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:01.187898    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:01.203085    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:01.203102    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:01.223369    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:01.223383    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:01.239521    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:01.239534    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:01.258291    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:01.258302    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:01.271163    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:01.271174    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:01.299173    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:01.299189    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:01.339687    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:01.339698    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:01.355218    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:01.355235    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:01.380178    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:01.380190    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:01.396883    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:01.396893    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:01.414459    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:01.414470    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:01.428378    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:01.428389    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:01.440071    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:01.440082    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:01.451806    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:01.451818    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:01.496495    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:01.496517    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:01.539746    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:01.539764    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:01.556844    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:01.556858    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:04.071736    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:09.073925    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:09.074105    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:09.092985    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:09.093092    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:09.111293    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:09.111384    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:09.123385    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:09.123474    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:09.135259    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:09.135333    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:09.157518    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:09.157566    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:09.169066    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:09.169141    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:09.180345    9658 logs.go:282] 0 containers: []
	W1209 03:36:09.180358    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:09.180433    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:09.192649    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:09.192665    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:09.192670    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:09.197455    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:09.197466    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:09.213005    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:09.213022    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:09.249209    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:09.249224    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:09.261579    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:09.261589    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:09.282247    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:09.282265    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:09.296881    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:09.296893    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:09.310492    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:09.310501    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:09.354564    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:09.354577    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:09.399373    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:09.399386    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:09.414932    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:09.414945    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:09.427237    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:09.427254    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:09.441645    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:09.441657    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:09.458991    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:09.459002    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:09.479181    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:09.479199    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:09.498045    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:09.498055    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:09.522950    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:09.522962    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:09.537381    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:09.537392    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:09.548500    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:09.548514    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:12.061871    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:17.063992    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:17.064101    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:17.075324    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:17.075406    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:17.086805    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:17.086888    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:17.098465    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:17.098546    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:17.109879    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:17.109953    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:17.121153    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:17.121237    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:17.131976    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:17.132065    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:17.143267    9658 logs.go:282] 0 containers: []
	W1209 03:36:17.143279    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:17.143354    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:17.155390    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:17.155407    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:17.155414    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:17.177682    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:17.177697    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:17.216318    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:17.216332    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:17.228541    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:17.228554    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:17.247209    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:17.247228    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:17.259995    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:17.260007    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:17.264601    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:17.264609    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:17.277547    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:17.277558    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:17.292659    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:17.292676    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:17.304985    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:17.304997    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:17.322921    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:17.322941    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:17.335893    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:17.335905    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:17.362344    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:17.362356    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:17.375612    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:17.375625    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:17.416524    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:17.416538    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:17.430892    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:17.430903    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:17.442763    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:17.442773    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:17.454250    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:17.454265    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:17.493569    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:17.493577    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:20.009121    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:25.009382    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:25.009477    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:25.021222    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:25.021310    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:25.032968    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:25.033057    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:25.043977    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:25.044067    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:25.055310    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:25.055397    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:25.067669    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:25.067757    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:25.078899    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:25.078986    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:25.090080    9658 logs.go:282] 0 containers: []
	W1209 03:36:25.090094    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:25.090168    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:25.105964    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:25.106001    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:25.106006    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:25.118799    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:25.118807    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:25.145916    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:25.145928    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:25.161634    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:25.161646    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:25.180873    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:25.180886    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:25.207913    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:25.207925    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:25.219628    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:25.219640    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:25.261300    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:25.261309    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:25.266440    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:25.266451    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:25.303821    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:25.303836    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:25.339572    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:25.339583    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:25.357576    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:25.357591    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:25.368489    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:25.368502    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:25.379837    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:25.379848    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:25.393792    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:25.393807    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:25.411167    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:25.411178    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:25.422630    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:25.422644    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:25.436197    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:25.436211    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:25.447292    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:25.447303    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:27.963074    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:32.965379    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:32.965490    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:32.979530    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:32.979619    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:32.991266    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:32.991350    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:33.003707    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:33.003802    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:33.015469    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:33.015554    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:33.027376    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:33.027459    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:33.038607    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:33.038692    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:33.049890    9658 logs.go:282] 0 containers: []
	W1209 03:36:33.049902    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:33.049980    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:33.061875    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:33.061892    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:33.061897    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:33.076524    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:33.076537    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:33.089377    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:33.089388    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:33.104145    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:33.104158    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:33.119542    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:33.119551    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:33.131510    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:33.131522    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:33.149749    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:33.149759    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:33.162839    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:33.162851    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:33.204419    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:33.204436    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:33.209238    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:33.209248    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:33.221512    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:33.221527    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:33.234687    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:33.234700    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:33.270161    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:33.270170    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:33.282672    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:33.282683    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:33.302880    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:33.302891    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:33.321492    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:33.321507    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:33.338680    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:33.338693    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:33.363185    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:33.363198    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:33.403148    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:33.403162    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:35.920275    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:40.922476    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:40.922584    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:40.934328    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:40.934422    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:40.946465    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:40.946545    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:40.958403    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:40.958493    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:40.974068    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:40.974147    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:40.986268    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:40.986353    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:41.001338    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:41.001427    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:41.012401    9658 logs.go:282] 0 containers: []
	W1209 03:36:41.012416    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:41.012494    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:41.027639    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:41.027654    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:41.027662    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:41.048839    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:41.048851    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:41.061758    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:41.061771    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:41.088647    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:41.088658    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:41.108203    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:41.108212    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:41.123980    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:41.123989    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:41.136043    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:41.136055    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:41.150637    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:41.150649    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:41.163352    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:41.163364    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:41.175798    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:41.175811    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:41.220337    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:41.220358    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:41.225996    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:41.226004    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:41.263618    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:41.263635    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:41.275810    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:41.275820    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:41.297736    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:41.297747    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:41.333225    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:41.333241    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:41.347206    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:41.347220    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:41.358571    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:41.358582    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:41.371068    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:41.371081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:43.894463    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:48.896732    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:48.896861    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:48.909626    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:48.909723    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:48.921217    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:48.921311    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:48.933126    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:48.933208    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:48.949726    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:48.949810    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:48.961374    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:48.961456    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:48.973744    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:48.973830    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:48.985507    9658 logs.go:282] 0 containers: []
	W1209 03:36:48.985520    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:48.985594    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:48.997173    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:48.997190    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:48.997196    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:49.035770    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:49.035784    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:49.071842    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:49.071853    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:49.087318    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:49.087329    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:49.100441    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:49.100452    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:49.113393    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:49.113403    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:49.132762    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:49.132776    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:49.145588    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:49.145602    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:49.158244    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:49.158259    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:49.174486    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:49.174499    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:49.186259    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:49.186270    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:49.227967    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:49.227984    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:49.232913    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:49.232919    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:49.248091    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:49.248103    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:49.267331    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:49.267346    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:49.280428    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:49.280439    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:49.291469    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:49.291479    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:49.302892    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:49.302902    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:49.320229    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:49.320243    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:51.846861    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:56.847722    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:56.847819    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:56.858972    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:56.859055    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:56.870247    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:56.870332    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:56.881557    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:56.881646    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:56.893127    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:56.893210    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:56.904707    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:56.904803    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:56.916407    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:56.916493    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:56.929020    9658 logs.go:282] 0 containers: []
	W1209 03:36:56.929034    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:56.929107    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:56.940502    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:56.940518    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:56.940524    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:56.953674    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:56.953685    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:56.973879    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:56.973888    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:56.992206    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:56.992220    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:57.004715    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:57.004727    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:57.019152    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:57.019167    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:57.060824    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:57.060838    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:57.075910    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:57.075924    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:57.088769    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:57.088782    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:57.109703    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:57.109719    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:57.122305    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:57.122316    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:57.134173    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:57.134185    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:57.159173    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:57.159182    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:57.200113    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:57.200125    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:57.205144    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:57.205153    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:57.218857    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:57.218868    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:57.230266    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:57.230279    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:57.269927    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:57.269939    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:57.284653    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:57.284663    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:59.798893    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:04.800423    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:04.800545    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:04.812501    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:04.812587    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:04.827687    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:04.827773    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:04.839433    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:04.839512    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:04.851440    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:04.851520    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:04.862429    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:04.862509    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:04.873273    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:04.873353    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:04.884350    9658 logs.go:282] 0 containers: []
	W1209 03:37:04.884360    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:04.884432    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:04.896325    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:04.896342    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:04.896348    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:04.914478    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:04.914492    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:04.926662    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:04.926674    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:04.947955    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:04.947968    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:04.967393    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:04.967407    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:04.980958    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:04.980971    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:04.993173    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:04.993187    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:05.018331    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:05.018344    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:05.040972    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:05.040983    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:05.056743    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:05.056752    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:05.069386    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:05.069400    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:05.085753    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:05.085767    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:05.097447    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:05.097459    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:05.138530    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:05.138543    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:05.150528    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:05.150540    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:05.164633    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:05.164645    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:05.197742    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:05.197754    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:05.209292    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:05.209305    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:05.213660    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:05.213669    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:07.755466    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:12.756143    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:12.756244    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:12.767877    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:12.767966    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:12.779392    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:12.779483    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:12.791243    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:12.791334    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:12.803867    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:12.803951    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:12.818073    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:12.818155    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:12.829246    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:12.829329    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:12.870100    9658 logs.go:282] 0 containers: []
	W1209 03:37:12.870114    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:12.870188    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:12.883384    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:12.883402    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:12.883408    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:12.903765    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:12.903777    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:12.922401    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:12.922409    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:12.935083    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:12.935096    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:12.947231    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:12.947241    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:12.971678    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:12.971690    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:12.984624    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:12.984637    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:12.989432    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:12.989440    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:13.027358    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:13.027369    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:13.041720    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:13.041730    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:13.075501    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:13.075516    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:13.091467    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:13.091490    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:13.103481    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:13.103492    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:13.143043    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:13.143055    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:13.156219    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:13.156232    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:13.167599    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:13.167608    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:13.179487    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:13.179498    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:13.191284    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:13.191294    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:13.208769    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:13.208784    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:15.722393    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:20.724737    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:20.724826    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:20.736411    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:20.736500    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:20.747787    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:20.747871    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:20.760309    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:20.760396    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:20.773227    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:20.773305    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:20.784784    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:20.784867    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:20.795973    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:20.796061    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:20.806832    9658 logs.go:282] 0 containers: []
	W1209 03:37:20.806844    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:20.806921    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:20.817824    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:20.817840    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:20.817846    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:20.822751    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:20.822764    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:20.838875    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:20.838889    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:20.855312    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:20.855323    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:20.867971    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:20.867982    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:20.880846    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:20.880858    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:20.904508    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:20.904518    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:20.939691    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:20.939703    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:20.957387    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:20.957398    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:20.999669    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:20.999679    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:21.014310    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:21.014319    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:21.025806    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:21.025817    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:21.044619    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:21.044633    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:21.059496    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:21.059507    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:21.075091    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:21.075102    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:21.110661    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:21.110672    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:21.124792    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:21.124805    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:21.136346    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:21.136357    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:21.155012    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:21.155021    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:23.669300    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:28.671619    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:28.671731    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:28.684887    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:28.684974    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:28.696185    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:28.696267    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:28.707639    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:28.707721    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:28.718629    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:28.718715    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:28.732554    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:28.732638    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:28.744077    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:28.744162    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:28.755556    9658 logs.go:282] 0 containers: []
	W1209 03:37:28.755568    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:28.755639    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:28.767332    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:28.767347    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:28.767352    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:28.779590    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:28.779603    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:28.792771    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:28.792786    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:28.821991    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:28.822003    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:28.840724    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:28.840738    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:28.859483    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:28.859502    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:28.871741    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:28.871755    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:28.906822    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:28.906832    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:28.939272    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:28.939284    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:28.950415    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:28.950425    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:28.961904    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:28.961913    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:28.977781    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:28.977795    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:28.995041    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:28.995050    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:29.036680    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:29.036694    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:29.040955    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:29.040964    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:29.054790    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:29.054803    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:29.069428    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:29.069440    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:29.084448    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:29.084461    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:29.096206    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:29.096218    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:31.620383    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:36.622848    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:36.622944    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:36.634749    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:36.634837    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:36.646576    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:36.646661    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:36.659012    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:36.659102    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:36.670804    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:36.670885    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:36.681667    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:36.681749    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:36.693581    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:36.693658    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:36.704935    9658 logs.go:282] 0 containers: []
	W1209 03:37:36.704946    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:36.705020    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:36.716356    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:36.716373    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:36.716380    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:36.755070    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:36.755082    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:36.771244    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:36.771256    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:36.783943    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:36.783955    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:36.797101    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:36.797113    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:36.817712    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:36.817726    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:36.836979    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:36.836988    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:36.849655    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:36.849667    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:36.891702    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:36.891713    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:36.896164    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:36.896173    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:36.909926    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:36.909937    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:36.921465    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:36.921474    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:36.935330    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:36.935343    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:36.946987    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:36.946998    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:36.958411    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:36.958425    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:36.980545    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:36.980551    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:37.013539    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:37.013551    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:37.031131    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:37.031141    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:37.043198    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:37.043209    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:39.563739    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:44.564901    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:44.565007    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:44.581858    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:44.581938    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:44.599980    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:44.600063    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:44.611800    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:44.611884    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:44.623067    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:44.623150    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:44.634204    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:44.634288    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:44.645578    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:44.645664    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:44.657486    9658 logs.go:282] 0 containers: []
	W1209 03:37:44.657498    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:44.657570    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:44.671117    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:44.671135    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:44.671141    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:44.706562    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:44.706572    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:44.721026    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:44.721038    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:44.744928    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:44.744940    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:44.757945    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:44.757956    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:44.762528    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:44.762539    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:44.777931    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:44.777946    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:44.798384    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:44.798397    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:44.815738    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:44.815749    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:44.827253    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:44.827267    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:44.868519    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:44.868550    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:44.904712    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:44.904723    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:44.919711    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:44.919723    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:44.930634    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:44.930651    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:44.942494    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:44.942504    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:44.953677    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:44.953688    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:44.967049    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:44.967062    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:44.978879    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:44.978893    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:44.990363    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:44.990373    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
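
Each failed health check triggers a full diagnostic sweep like the one above: minikube enumerates the running and exited containers for every control-plane component by name filter, then tails the last 400 lines of each, plus kubelet/Docker journals and dmesg. The sweep can be reproduced by hand inside the guest; a minimal sketch for one component (the `k8s_kube-apiserver` name prefix matches the filter minikube uses):

    # list all containers (running or exited) for one component
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
    # tail the last 400 log lines of a container found above
    docker logs --tail 400 <container-id>
    # host-level context comes from journald
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
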
	I1209 03:37:47.509353    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:52.509701    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
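
The healthz probe itself is a plain HTTPS GET with a 5-second client timeout, which is why each "stopped:" line lands almost exactly five seconds after the corresponding "Checking" line. A manual equivalent from inside the VM (sketch; -k skips certificate verification, since the apiserver certificate is signed by minikube's own CA rather than a trusted root):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz
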
	I1209 03:37:52.509811    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:52.523938    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:52.524022    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:52.535764    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:52.535855    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:52.548301    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:52.548383    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:52.559996    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:52.560079    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:52.572068    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:52.572146    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:52.583925    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:52.584005    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:52.595576    9658 logs.go:282] 0 containers: []
	W1209 03:37:52.595588    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:52.595662    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:52.606544    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:52.606559    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:52.606565    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:52.621757    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:52.621770    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:52.634567    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:52.634581    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:52.660551    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:52.660564    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:52.672632    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:52.672644    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:52.716309    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:52.716324    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:52.731668    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:52.731682    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:52.743966    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:52.743979    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:52.760866    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:52.760877    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:52.780944    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:52.780958    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:52.800233    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:52.800248    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:52.819369    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:52.819383    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:52.830776    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:52.830787    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:52.865115    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:52.865129    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:52.888169    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:52.888181    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:52.924068    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:52.924081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:52.942636    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:52.942649    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:52.960635    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:52.960648    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:52.973576    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:52.973588    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:55.478756    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:00.481224    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:00.481332    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:00.496021    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:00.496101    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:00.507054    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:00.507133    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:00.517922    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:00.518006    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:00.529288    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:00.529362    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:00.540954    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:00.541044    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:00.553700    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:00.553790    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:00.565268    9658 logs.go:282] 0 containers: []
	W1209 03:38:00.565281    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:00.565352    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:00.579139    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:00.579226    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:00.579269    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:00.620138    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:00.620162    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:00.633537    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:00.633550    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:00.646394    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:00.646406    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:00.689182    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:00.689193    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:00.704654    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:00.704666    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:00.723433    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:00.723445    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:00.740747    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:00.740760    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:00.752538    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:00.752549    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:00.764583    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:00.764598    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:00.779016    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:00.779026    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:00.793678    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:00.793691    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:00.808371    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:00.808381    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:00.827693    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:00.827708    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:00.839138    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:00.839149    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:00.843559    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:00.843565    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:00.884621    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:00.884636    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:00.896010    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:00.896021    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:00.907484    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:00.907499    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:03.432244    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:08.434496    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:08.434686    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:08.447032    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:08.447100    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:08.462858    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:08.462931    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:08.474121    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:08.474190    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:08.490338    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:08.490411    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:08.501773    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:08.501851    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:08.513423    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:08.513499    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:08.524359    9658 logs.go:282] 0 containers: []
	W1209 03:38:08.524371    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:08.524433    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:08.535833    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:08.535846    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:08.535850    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:08.577875    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:08.577889    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:08.583016    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:08.583028    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:08.598508    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:08.598521    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:08.610914    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:08.610928    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:08.626065    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:08.626081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:08.646732    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:08.646749    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:08.659258    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:08.659269    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:08.671294    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:08.671307    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:08.705373    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:08.705385    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:08.716430    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:08.716444    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:08.735539    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:08.735549    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:08.752899    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:08.752914    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:08.764324    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:08.764337    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:08.776991    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:08.777007    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:08.811526    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:08.811566    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:08.822897    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:08.822908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:08.834537    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:08.834549    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:08.852123    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:08.852134    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:11.373893    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:16.376444    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:16.376562    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:16.389428    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:16.389511    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:16.401395    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:16.401483    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:16.414577    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:16.414665    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:16.426661    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:16.426741    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:16.438502    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:16.438585    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:16.449885    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:16.449974    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:16.461132    9658 logs.go:282] 0 containers: []
	W1209 03:38:16.461147    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:16.461223    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:16.472759    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:16.472777    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:16.472784    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:16.511127    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:16.511139    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:16.526440    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:16.526454    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:16.539444    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:16.539457    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:16.563190    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:16.563207    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:16.578639    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:16.578653    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:16.591121    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:16.591135    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:16.605303    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:16.605318    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:16.618489    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:16.618500    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:16.632639    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:16.632652    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:16.674646    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:16.674666    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:16.698418    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:16.698433    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:16.718022    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:16.718041    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:16.736820    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:16.736837    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:16.750054    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:16.750070    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:16.762681    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:16.762695    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:16.767646    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:16.767658    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:16.804069    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:16.804091    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:16.816006    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:16.816020    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:19.336501    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:24.338603    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:24.338731    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:24.350428    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:24.350521    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:24.362357    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:24.362421    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:24.374168    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:24.374251    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:24.386174    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:24.386257    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:24.397553    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:24.397634    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:24.408959    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:24.409048    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:24.420379    9658 logs.go:282] 0 containers: []
	W1209 03:38:24.420390    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:24.420461    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:24.431521    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:24.431538    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:24.431544    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:24.467681    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:24.467700    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:24.483794    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:24.483813    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:24.498537    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:24.498558    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:24.518635    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:24.518649    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:24.537352    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:24.537365    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:24.578202    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:24.578223    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:24.590598    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:24.590610    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:24.606950    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:24.606960    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:24.625229    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:24.625242    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:24.638677    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:24.638688    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:24.643556    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:24.643567    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:24.662751    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:24.662768    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:24.675856    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:24.675870    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:24.689199    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:24.689208    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:24.705886    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:24.705896    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:24.745151    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:24.745165    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:24.756751    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:24.756765    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:24.778260    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:24.778274    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:27.296357    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:32.297853    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:32.297947    9658 kubeadm.go:597] duration metric: took 4m5.165122208s to restartPrimaryControlPlane
	W1209 03:38:32.298006    9658 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 03:38:32.298033    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1209 03:38:33.330595    9658 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.032562584s)
	I1209 03:38:33.330680    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:38:33.335882    9658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:38:33.338874    9658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:38:33.341916    9658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:38:33.341922    9658 kubeadm.go:157] found existing configuration files:
	
	I1209 03:38:33.341950    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf
	I1209 03:38:33.344478    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:38:33.344512    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:38:33.347121    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf
	I1209 03:38:33.350238    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:38:33.350267    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:38:33.353556    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf
	I1209 03:38:33.355949    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:38:33.355978    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:38:33.358844    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf
	I1209 03:38:33.361993    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:38:33.362023    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
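
Because `kubeadm reset` already wiped /etc/kubernetes, every grep above exits with status 2 ("No such file or directory") and the follow-up `rm -f` calls are no-ops; minikube runs them regardless, so the cleanup is idempotent whether or not the files survived. The same check, collapsed into a loop (illustrative only, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      # keep the file only if it points at the expected control-plane endpoint
      sudo grep -q https://control-plane.minikube.internal:60625 \
        /etc/kubernetes/$f.conf 2>/dev/null || sudo rm -f /etc/kubernetes/$f.conf
    done
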
	I1209 03:38:33.364812    9658 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 03:38:33.382754    9658 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 03:38:33.382833    9658 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 03:38:33.431291    9658 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 03:38:33.431356    9658 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 03:38:33.431404    9658 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 03:38:33.480282    9658 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 03:38:33.484484    9658 out.go:235]   - Generating certificates and keys ...
	I1209 03:38:33.484522    9658 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 03:38:33.484555    9658 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 03:38:33.484599    9658 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 03:38:33.484633    9658 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 03:38:33.484670    9658 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 03:38:33.484713    9658 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 03:38:33.484756    9658 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 03:38:33.484792    9658 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 03:38:33.484839    9658 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 03:38:33.484877    9658 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 03:38:33.484901    9658 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 03:38:33.484933    9658 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 03:38:33.536458    9658 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 03:38:33.620259    9658 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 03:38:33.701450    9658 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 03:38:33.813437    9658 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 03:38:33.848371    9658 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 03:38:33.848701    9658 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 03:38:33.848730    9658 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 03:38:33.933906    9658 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 03:38:33.938635    9658 out.go:235]   - Booting up control plane ...
	I1209 03:38:33.938688    9658 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 03:38:33.938734    9658 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 03:38:33.938772    9658 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 03:38:33.938819    9658 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 03:38:33.938946    9658 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 03:38:38.436275    9658 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502561 seconds
	I1209 03:38:38.436354    9658 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 03:38:38.441325    9658 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 03:38:38.949458    9658 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 03:38:38.949593    9658 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-765000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 03:38:39.454961    9658 kubeadm.go:310] [bootstrap-token] Using token: jfi1wa.uirtef2mabjp664a
	I1209 03:38:39.461702    9658 out.go:235]   - Configuring RBAC rules ...
	I1209 03:38:39.461774    9658 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 03:38:39.461822    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 03:38:39.470774    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 03:38:39.471632    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 03:38:39.472878    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 03:38:39.474917    9658 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 03:38:39.478614    9658 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 03:38:39.684827    9658 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 03:38:39.860285    9658 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 03:38:39.860824    9658 kubeadm.go:310] 
	I1209 03:38:39.860852    9658 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 03:38:39.860855    9658 kubeadm.go:310] 
	I1209 03:38:39.860892    9658 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 03:38:39.860896    9658 kubeadm.go:310] 
	I1209 03:38:39.860907    9658 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 03:38:39.860946    9658 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 03:38:39.860984    9658 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 03:38:39.860986    9658 kubeadm.go:310] 
	I1209 03:38:39.861012    9658 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 03:38:39.861017    9658 kubeadm.go:310] 
	I1209 03:38:39.861044    9658 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 03:38:39.861048    9658 kubeadm.go:310] 
	I1209 03:38:39.861072    9658 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 03:38:39.861110    9658 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 03:38:39.861149    9658 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 03:38:39.861155    9658 kubeadm.go:310] 
	I1209 03:38:39.861218    9658 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 03:38:39.861262    9658 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 03:38:39.861269    9658 kubeadm.go:310] 
	I1209 03:38:39.861326    9658 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jfi1wa.uirtef2mabjp664a \
	I1209 03:38:39.861377    9658 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 \
	I1209 03:38:39.861388    9658 kubeadm.go:310] 	--control-plane 
	I1209 03:38:39.861392    9658 kubeadm.go:310] 
	I1209 03:38:39.861436    9658 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 03:38:39.861440    9658 kubeadm.go:310] 
	I1209 03:38:39.861480    9658 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jfi1wa.uirtef2mabjp664a \
	I1209 03:38:39.861531    9658 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 
	I1209 03:38:39.861793    9658 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
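
The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes: it is the SHA-256 digest of the DER-encoded public key of the cluster CA certificate. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation (path assumption: minikube's certificateDir is /var/lib/minikube/certs, per the [certs] line earlier; stock kubeadm uses /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
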
	I1209 03:38:39.861831    9658 cni.go:84] Creating CNI manager for ""
	I1209 03:38:39.861843    9658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:38:39.867709    9658 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:38:39.877715    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:38:39.880820    9658 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
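
The 496-byte conflist itself is not echoed into the log; its exact contents are minikube's own, but a minimal bridge conflist of the kind being written generally has the following shape (a sketch with illustrative values, not the file minikube generated):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
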
	I1209 03:38:39.886348    9658 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:38:39.886403    9658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 03:38:39.886422    9658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-765000 minikube.k8s.io/updated_at=2024_12_09T03_38_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=running-upgrade-765000 minikube.k8s.io/primary=true
	I1209 03:38:39.931204    9658 ops.go:34] apiserver oom_adj: -16
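
Reading oom_adj confirms the apiserver is shielded from the kernel OOM killer: -16 biases the kernel strongly away from killing that process. /proc/<pid>/oom_adj is the deprecated legacy interface minikube reads here; both it and the modern oom_score_adj can be checked directly (sketch):

    pid=$(pgrep -xn kube-apiserver)   # -x exact name match, -n newest
    cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj
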
	I1209 03:38:39.931203    9658 kubeadm.go:1113] duration metric: took 44.848042ms to wait for elevateKubeSystemPrivileges
	I1209 03:38:39.931298    9658 kubeadm.go:394] duration metric: took 4m12.812350083s to StartCluster
	I1209 03:38:39.931314    9658 settings.go:142] acquiring lock: {Name:mk9d239bb773df077cf7eb12290ff1e68f296c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:39.931389    9658 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:38:39.931817    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:39.932031    9658 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:38:39.932096    9658 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:38:39.932132    9658 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-765000"
	I1209 03:38:39.932141    9658 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-765000"
	I1209 03:38:39.932152    9658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-765000"
	I1209 03:38:39.932143    9658 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-765000"
	W1209 03:38:39.932164    9658 addons.go:243] addon storage-provisioner should already be in state true
	I1209 03:38:39.932175    9658 host.go:66] Checking if "running-upgrade-765000" exists ...
	I1209 03:38:39.932221    9658 config.go:182] Loaded profile config "running-upgrade-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:38:39.933172    9658 kapi.go:59] client config for running-upgrade-765000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10431f740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:38:39.933297    9658 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-765000"
	W1209 03:38:39.933302    9658 addons.go:243] addon default-storageclass should already be in state true
	I1209 03:38:39.933309    9658 host.go:66] Checking if "running-upgrade-765000" exists ...
	I1209 03:38:39.936745    9658 out.go:177] * Verifying Kubernetes components...
	I1209 03:38:39.937085    9658 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:39.939814    9658 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 03:38:39.939821    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:38:39.942687    9658 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:38:39.946732    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:38:39.950738    9658 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:39.950744    9658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 03:38:39.950750    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:38:40.043255    9658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:38:40.048742    9658 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:38:40.048800    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:38:40.052912    9658 api_server.go:72] duration metric: took 120.872167ms to wait for apiserver process to appear ...
	I1209 03:38:40.052920    9658 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:38:40.052927    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:40.112325    9658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:40.127019    9658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:40.448773    9658 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 03:38:40.448785    9658 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 03:38:45.054629    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:45.054694    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:50.054886    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:50.054921    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:55.055042    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:55.055065    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:00.055322    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:00.055349    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:05.055609    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:05.055632    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:10.056091    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:10.056118    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 03:39:10.450589    9658 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 03:39:10.454868    9658 out.go:177] * Enabled addons: storage-provisioner
	I1209 03:39:10.461762    9658 addons.go:510] duration metric: took 30.530246667s for enable addons: enabled=[storage-provisioner]
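
The default-storageclass addon failed above only because its callback needs a live apiserver to list StorageClasses. Once the apiserver is reachable, the same result can be achieved manually by annotating minikube's provisioned class (the class name `standard` is minikube's usual default and is an assumption here):

    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
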
	I1209 03:39:15.056719    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:15.056766    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:20.057625    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:20.057670    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:25.058669    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:25.058723    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:30.060073    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:30.060130    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:35.061956    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:35.062023    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:40.064183    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:40.064327    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:40.079761    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:39:40.079843    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:40.097562    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:39:40.097644    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:40.111777    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:39:40.111861    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:40.133474    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:39:40.133566    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:40.149576    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:39:40.149662    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:40.161112    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:39:40.161193    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:40.171378    9658 logs.go:282] 0 containers: []
	W1209 03:39:40.171390    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:40.171460    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:40.181575    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:39:40.181592    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:40.181598    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:40.186337    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:39:40.186344    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:39:40.200626    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:39:40.200638    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:39:40.221425    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:39:40.221442    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:39:40.232782    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:39:40.232792    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:39:40.244757    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:40.244768    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:40.269673    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:40.269684    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:39:40.303478    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:40.303487    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:40.339592    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:39:40.339605    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:39:40.355086    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:39:40.355097    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:39:40.369841    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:39:40.369853    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:39:40.382022    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:39:40.382033    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:39:40.393909    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:39:40.393921    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
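	(Editor's note: each diagnostic cycle like the one above first resolves container IDs with `docker ps` name filters, then tails the last 400 lines of each container's logs. The individual commands are verbatim from the log; wrapping them in a loop, as sketched below, is merely an illustrative way to run the same collection manually:

	    # Hypothetical helper loop over the components minikube inspects.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager storage-provisioner; do
	      for id in $(docker ps -a --filter=name=k8s_$c --format='{{.ID}}'); do
	        echo "== $c ($id) =="
	        docker logs --tail 400 "$id"
	      done
	    done

	Note also the container-status line just above: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` degrades gracefully to plain `docker ps -a` on hosts where crictl is absent.)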
	I1209 03:39:42.907509    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:47.909591    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:47.909849    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:47.935830    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:39:47.935941    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:47.950638    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:39:47.950745    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:47.967070    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:39:47.967156    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:47.979412    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:39:47.979505    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:47.989785    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:39:47.989871    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:48.000730    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:39:48.000812    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:48.011089    9658 logs.go:282] 0 containers: []
	W1209 03:39:48.011104    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:48.011173    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:48.022572    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:39:48.022589    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:39:48.022595    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:39:48.037429    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:39:48.037442    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:39:48.049994    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:39:48.050008    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:39:48.061537    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:39:48.061550    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:39:48.076895    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:39:48.076908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:39:48.088751    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:48.088761    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:48.093890    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:48.093900    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:48.133178    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:39:48.133189    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:39:48.148060    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:39:48.148070    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:39:48.165660    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:39:48.165670    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:39:48.185455    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:48.185466    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:48.209111    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:39:48.209122    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:48.221345    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:48.221354    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:39:50.758151    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:55.759202    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:55.759389    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:55.778100    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:39:55.778217    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:55.791657    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:39:55.791757    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:55.806177    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:39:55.806259    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:55.816606    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:39:55.816679    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:55.826970    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:39:55.827046    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:55.837793    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:39:55.837869    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:55.847932    9658 logs.go:282] 0 containers: []
	W1209 03:39:55.847944    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:55.848014    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:55.858098    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:39:55.858114    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:39:55.858120    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:39:55.869293    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:55.869308    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:55.892459    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:39:55.892467    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:55.903930    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:55.903940    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:55.908310    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:39:55.908317    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:39:55.922791    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:39:55.922804    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:39:55.934490    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:39:55.934501    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:39:55.945702    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:39:55.945712    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:39:55.960146    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:39:55.960156    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:39:55.977892    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:39:55.977903    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:39:55.989080    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:55.989090    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:39:56.023822    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:56.023836    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:56.059934    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:39:56.059945    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:39:58.575471    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:03.577760    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:03.578073    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:03.595968    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:03.596072    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:03.610214    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:03.610306    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:03.621994    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:03.622068    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:03.632038    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:03.632126    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:03.642996    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:03.643072    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:03.653712    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:03.653787    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:03.664033    9658 logs.go:282] 0 containers: []
	W1209 03:40:03.664044    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:03.664109    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:03.674585    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:03.674603    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:03.674608    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:03.710046    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:03.710057    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:03.721915    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:03.721929    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:03.736138    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:03.736148    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:03.748930    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:03.748942    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:03.771685    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:03.771698    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:03.795023    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:03.795033    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:03.829274    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:03.829288    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:03.833749    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:03.833758    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:03.848635    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:03.848645    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:03.862895    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:03.862908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:03.876324    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:03.876335    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:03.888199    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:03.888212    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:06.403210    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:11.405439    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:11.405682    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:11.432030    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:11.432147    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:11.449220    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:11.449312    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:11.462665    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:11.462750    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:11.473740    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:11.473824    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:11.484912    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:11.484997    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:11.495335    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:11.495413    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:11.505525    9658 logs.go:282] 0 containers: []
	W1209 03:40:11.505537    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:11.505607    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:11.515971    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:11.515986    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:11.515991    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:11.530396    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:11.530406    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:11.544353    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:11.544364    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:11.556109    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:11.556119    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:11.570652    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:11.570662    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:11.588461    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:11.588474    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:11.613538    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:11.613548    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:11.624921    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:11.624932    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:11.659170    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:11.659182    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:11.664316    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:11.664324    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:11.676245    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:11.676258    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:11.691096    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:11.691108    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:11.702247    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:11.702257    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:14.239111    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:19.241234    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:19.241413    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:19.259234    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:19.259317    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:19.271474    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:19.271554    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:19.282413    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:19.282495    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:19.293213    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:19.293297    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:19.303700    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:19.303802    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:19.314352    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:19.314424    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:19.324363    9658 logs.go:282] 0 containers: []
	W1209 03:40:19.324375    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:19.324437    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:19.335649    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:19.335667    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:19.335673    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:19.372224    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:19.372235    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:19.376877    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:19.376884    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:19.451600    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:19.451616    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:19.466355    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:19.466365    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:19.485021    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:19.485035    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:19.496658    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:19.496670    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:19.516900    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:19.516915    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:19.530706    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:19.530718    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:19.544845    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:19.544855    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:19.556378    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:19.556389    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:19.569763    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:19.569772    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:19.594637    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:19.594649    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:22.108658    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:27.110790    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:27.111011    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:27.126479    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:27.126583    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:27.138768    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:27.138844    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:27.149751    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:27.149835    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:27.159975    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:27.160059    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:27.170593    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:27.170666    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:27.180786    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:27.180855    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:27.191455    9658 logs.go:282] 0 containers: []
	W1209 03:40:27.191467    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:27.191528    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:27.201672    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:27.201689    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:27.201695    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:27.234860    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:27.234869    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:27.239669    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:27.239676    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:27.253790    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:27.253801    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:27.269106    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:27.269117    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:27.280859    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:27.280871    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:27.305283    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:27.305294    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:27.341758    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:27.341772    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:27.355928    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:27.355939    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:27.367462    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:27.367477    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:27.379134    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:27.379145    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:27.393525    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:27.393535    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:27.411154    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:27.411164    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:29.925267    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:34.927539    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:34.927793    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:34.950806    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:34.950932    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:34.966763    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:34.966844    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:34.979412    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:34.979486    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:34.990680    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:34.990760    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:35.001025    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:35.001107    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:35.011695    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:35.011774    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:35.021679    9658 logs.go:282] 0 containers: []
	W1209 03:40:35.021691    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:35.021760    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:35.032051    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:35.032067    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:35.032073    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:35.065307    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:35.065316    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:35.102982    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:35.102993    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:35.122896    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:35.122908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:35.135000    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:35.135011    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:35.147087    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:35.147097    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:35.173120    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:35.173128    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:35.186106    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:35.186115    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:35.190993    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:35.191000    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:35.204309    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:35.204320    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:35.215581    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:35.215596    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:35.230741    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:35.230752    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:35.248040    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:35.248050    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:37.761793    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:42.763967    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:42.764163    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:42.778043    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:42.778129    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:42.789854    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:42.789931    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:42.800571    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:42.800638    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:42.810785    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:42.810854    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:42.821189    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:42.821270    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:42.831608    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:42.831675    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:42.841672    9658 logs.go:282] 0 containers: []
	W1209 03:40:42.841682    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:42.841740    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:42.851621    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:42.851636    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:42.851642    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:42.885348    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:42.885359    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:42.890407    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:42.890416    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:42.907148    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:42.907158    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:42.923308    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:42.923319    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:42.940079    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:42.940089    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:42.963162    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:42.963170    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:42.976719    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:42.976731    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:43.014872    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:43.014885    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:43.029053    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:43.029064    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:43.040506    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:43.040520    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:43.055392    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:43.055404    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:43.068450    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:43.068459    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:45.582146    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:50.584013    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:50.584199    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:50.602070    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:50.602182    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:50.615646    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:50.615740    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:50.627144    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:50.627225    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:50.637108    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:50.637185    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:50.647435    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:50.647510    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:50.657778    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:50.657849    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:50.668471    9658 logs.go:282] 0 containers: []
	W1209 03:40:50.668488    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:50.668559    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:50.679584    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:50.679600    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:50.679606    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:50.690768    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:50.690779    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:50.695282    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:50.695289    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:50.731490    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:50.731503    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:50.745637    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:50.745647    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:50.758198    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:50.758210    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:50.773965    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:50.773978    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:50.791251    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:50.791261    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:50.820387    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:50.820409    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:50.870856    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:50.870875    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:50.895065    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:50.895082    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:50.917270    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:50.917283    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:50.940989    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:50.941002    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:53.455291    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:58.456417    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:58.456640    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:58.476916    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:58.477022    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:58.491602    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:58.491690    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:58.504044    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:58.504128    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:58.515261    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:58.515343    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:58.525445    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:58.525529    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:58.540621    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:58.540695    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:58.554149    9658 logs.go:282] 0 containers: []
	W1209 03:40:58.554162    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:58.554231    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:58.567498    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:58.567515    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:58.567522    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:58.584548    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:40:58.584560    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:40:58.596341    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:58.596354    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:58.608159    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:40:58.608171    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:40:58.619438    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:58.619450    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:58.645141    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:58.645152    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:58.657221    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:58.657232    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:58.669566    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:58.669577    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:58.704914    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:58.704925    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:58.709793    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:58.709801    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:58.744345    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:58.744359    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:58.758431    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:58.758442    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:58.770062    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:58.770075    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:58.787827    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:58.787838    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:58.799337    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:58.799350    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:01.318630    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:06.321013    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:06.321214    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:06.340618    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:06.340730    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:06.355130    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:06.355221    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:06.367710    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:06.367803    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:06.378906    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:06.378987    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:06.395431    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:06.395513    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:06.411012    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:06.411092    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:06.421406    9658 logs.go:282] 0 containers: []
	W1209 03:41:06.421422    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:06.421488    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:06.432142    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:06.432161    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:06.432166    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:06.444105    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:06.444115    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:06.479828    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:06.479836    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:06.488070    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:06.488083    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:06.505910    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:06.505920    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:06.524923    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:06.524936    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:06.540024    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:06.540034    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:06.554948    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:06.554961    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:06.567559    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:06.567572    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:06.579595    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:06.579607    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:06.593441    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:06.593453    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:06.605573    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:06.605586    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:06.617279    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:06.617290    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:06.654589    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:06.654600    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:06.666567    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:06.666577    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:09.193343    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:14.195659    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:14.195830    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:14.208281    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:14.208354    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:14.229320    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:14.229412    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:14.240342    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:14.240427    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:14.250852    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:14.250931    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:14.261182    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:14.261267    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:14.271637    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:14.271713    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:14.282201    9658 logs.go:282] 0 containers: []
	W1209 03:41:14.282213    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:14.282281    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:14.292303    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:14.292320    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:14.292325    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:14.332112    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:14.332124    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:14.344202    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:14.344214    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:14.355672    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:14.355683    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:14.373253    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:14.373265    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:14.397755    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:14.397762    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:14.433122    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:14.433134    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:14.444972    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:14.444983    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:14.459684    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:14.459698    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:14.471643    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:14.471654    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:14.476567    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:14.476575    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:14.493016    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:14.493027    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:14.507249    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:14.507264    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:14.522468    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:14.522479    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:14.537834    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:14.537849    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:17.051087    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:22.052033    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:22.052321    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:22.077926    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:22.078049    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:22.095550    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:22.095635    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:22.114750    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:22.114836    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:22.125608    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:22.125691    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:22.136184    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:22.136265    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:22.147305    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:22.147389    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:22.157097    9658 logs.go:282] 0 containers: []
	W1209 03:41:22.157107    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:22.157174    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:22.167682    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:22.167701    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:22.167707    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:22.181817    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:22.181828    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:22.194070    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:22.194081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:22.205889    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:22.205902    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:22.225769    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:22.225779    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:22.237320    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:22.237331    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:22.270458    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:22.270470    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:22.274961    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:22.274970    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:22.289156    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:22.289166    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:22.313635    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:22.313642    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:22.347695    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:22.347706    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:22.366765    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:22.366776    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:22.379097    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:22.379109    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:22.391155    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:22.391166    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:22.402725    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:22.402737    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:24.922024    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:29.924247    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:29.924463    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:29.942909    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:29.943015    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:29.956858    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:29.956946    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:29.969234    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:29.969319    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:29.979760    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:29.979842    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:29.990728    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:29.990814    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:30.001935    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:30.002017    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:30.016730    9658 logs.go:282] 0 containers: []
	W1209 03:41:30.016741    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:30.016813    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:30.028078    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:30.028101    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:30.028107    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:30.040403    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:30.040417    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:30.055077    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:30.055091    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:30.067357    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:30.067368    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:30.078764    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:30.078779    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:30.119701    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:30.119712    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:30.131753    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:30.131766    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:30.143842    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:30.143854    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:30.148837    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:30.148844    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:30.163445    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:30.163458    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:30.179821    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:30.179832    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:30.191754    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:30.191765    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:30.203253    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:30.203263    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:30.220611    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:30.220620    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:30.245404    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:30.245415    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:32.782429    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:37.784629    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:37.784856    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:37.809184    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:37.809295    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:37.822926    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:37.823008    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:37.835187    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:37.835270    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:37.852555    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:37.852646    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:37.874139    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:37.874226    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:37.885412    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:37.885491    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:37.895786    9658 logs.go:282] 0 containers: []
	W1209 03:41:37.895800    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:37.895862    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:37.906839    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:37.906856    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:37.906862    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:37.921282    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:37.921292    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:37.932490    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:37.932503    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:37.956861    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:37.956872    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:37.961237    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:37.961246    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:37.975698    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:37.975711    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:37.987691    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:37.987702    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:37.999751    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:37.999762    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:38.017871    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:38.017882    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:38.054571    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:38.054588    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:38.067001    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:38.067011    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:38.103153    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:38.103166    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:38.118266    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:38.118280    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:38.133407    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:38.133421    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:38.150684    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:38.150698    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:40.664782    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:45.666962    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:45.667211    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:45.691898    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:45.692040    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:45.707551    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:45.707641    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:45.720577    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:45.720865    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:45.732583    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:45.732670    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:45.742875    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:45.742961    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:45.755813    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:45.755902    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:45.765822    9658 logs.go:282] 0 containers: []
	W1209 03:41:45.765837    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:45.765912    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:45.776169    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:45.776187    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:45.776194    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:45.788416    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:45.788426    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:45.823213    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:45.823222    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:45.842070    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:45.842079    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:45.857023    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:45.857033    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:45.868853    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:45.868864    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:45.908653    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:45.908668    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:45.920579    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:45.920588    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:45.944230    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:45.944238    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:45.948942    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:45.948948    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:45.962930    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:45.962943    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:45.974875    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:45.974886    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:45.992598    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:45.992611    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:46.004481    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:46.004491    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:46.016396    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:46.016407    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:48.529826    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:53.532122    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:53.532323    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:53.550847    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:53.550945    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:53.566747    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:53.566826    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:53.578318    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:53.578398    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:53.593531    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:53.593613    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:53.605966    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:53.606033    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:53.617190    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:53.617268    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:53.627642    9658 logs.go:282] 0 containers: []
	W1209 03:41:53.627654    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:53.627724    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:53.638613    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:53.638629    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:53.638635    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:53.643350    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:53.643357    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:53.682446    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:53.682461    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:53.697365    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:53.697377    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:53.709958    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:53.709970    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:53.725357    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:53.725369    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:53.736989    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:53.736998    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:53.750504    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:53.750516    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:53.769162    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:53.769176    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:53.781044    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:53.781055    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:53.795642    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:53.795652    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:53.807316    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:53.807326    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:53.841375    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:53.841388    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:53.858924    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:53.858937    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:53.883495    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:53.883505    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:56.397829    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:01.400339    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:01.400472    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:01.411890    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:01.411961    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:01.422472    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:01.422557    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:01.433193    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:01.433273    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:01.443789    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:01.443873    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:01.455087    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:01.455168    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:01.465808    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:01.465891    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:01.476440    9658 logs.go:282] 0 containers: []
	W1209 03:42:01.476452    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:01.476525    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:01.499799    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:01.499818    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:01.499826    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:01.514468    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:01.514480    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:01.526361    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:01.526374    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:01.537584    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:01.537597    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:01.564223    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:01.564240    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:01.600505    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:01.600512    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:01.605282    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:01.605290    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:01.617248    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:01.617262    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:01.629337    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:01.629348    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:01.643939    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:01.643951    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:01.655378    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:01.655389    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:01.673032    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:01.673042    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:01.710238    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:01.710252    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:01.725833    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:01.725846    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:01.741889    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:01.741900    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:04.258045    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:09.259300    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:09.259409    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:09.270546    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:09.270654    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:09.280844    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:09.280920    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:09.292469    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:09.292557    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:09.303216    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:09.303295    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:09.313809    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:09.313889    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:09.324829    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:09.324909    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:09.335531    9658 logs.go:282] 0 containers: []
	W1209 03:42:09.335544    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:09.335617    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:09.346372    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:09.346387    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:09.346392    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:09.358506    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:09.358517    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:09.373476    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:09.373486    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:09.384899    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:09.384908    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:09.422487    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:09.422498    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:09.434143    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:09.434153    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:09.448263    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:09.448273    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:09.471445    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:09.471453    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:09.504569    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:09.504576    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:09.509086    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:09.509095    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:09.521268    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:09.521279    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:09.539107    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:09.539117    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:09.550562    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:09.550572    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:09.563933    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:09.563943    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:09.578380    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:09.578392    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:12.092170    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:17.094505    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:17.094847    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:17.127981    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:17.128083    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:17.143115    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:17.143195    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:17.156207    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:17.156290    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:17.166668    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:17.166747    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:17.177340    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:17.177411    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:17.187883    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:17.187952    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:17.197849    9658 logs.go:282] 0 containers: []
	W1209 03:42:17.197859    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:17.197921    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:17.208626    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:17.208643    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:17.208649    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:17.242399    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:17.242410    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:17.276553    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:17.276563    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:17.288452    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:17.288462    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:17.309932    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:17.309942    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:17.321526    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:17.321536    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:17.345264    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:17.345273    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:17.360247    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:17.360256    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:17.374465    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:17.374475    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:17.386199    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:17.386210    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:17.400040    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:17.400051    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:17.412251    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:17.412261    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:17.424248    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:17.424260    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:17.436079    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:17.436088    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:17.440683    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:17.440690    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:19.953199    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:24.954110    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:24.954298    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:24.970654    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:24.970760    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:24.983927    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:24.984020    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:24.995377    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:24.995461    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:25.006178    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:25.006259    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:25.017231    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:25.017316    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:25.031897    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:25.031974    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:25.042017    9658 logs.go:282] 0 containers: []
	W1209 03:42:25.042029    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:25.042105    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:25.056489    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:25.056511    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:25.056518    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:25.092890    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:25.092904    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:25.108012    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:25.108023    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:25.130966    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:25.130977    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:25.142982    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:25.142997    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:25.157629    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:25.157640    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:25.175046    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:25.175056    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:25.191082    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:25.191092    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:25.195702    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:25.195711    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:25.230002    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:25.230015    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:25.241391    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:25.241410    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:25.253723    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:25.253736    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:25.268357    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:25.268370    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:25.279532    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:25.279542    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:25.291375    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:25.291385    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:27.805150    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:32.807390    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:32.807637    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:32.828329    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:32.828454    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:32.843716    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:32.843802    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:32.857354    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:32.857446    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:32.868706    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:32.868779    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:32.880212    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:32.880305    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:32.891318    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:32.891412    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:32.902346    9658 logs.go:282] 0 containers: []
	W1209 03:42:32.902357    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:32.902423    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:32.913232    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:32.913254    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:32.913259    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:32.933724    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:32.933741    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:32.945510    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:32.945521    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:32.957549    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:32.957563    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:32.993612    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:32.993625    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:33.009178    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:33.009193    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:33.026850    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:33.026860    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:33.038195    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:33.038207    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:33.072729    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:33.072738    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:33.087091    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:33.087102    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:33.101101    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:33.101115    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:33.105936    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:33.105943    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:33.117706    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:33.117718    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:33.142676    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:33.142687    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:33.154800    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:33.154814    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:35.671522    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:40.673637    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:40.677674    9658 out.go:201] 
	W1209 03:42:40.681724    9658 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 03:42:40.681729    9658 out.go:270] * 
	W1209 03:42:40.682190    9658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:42:40.696635    9658 out.go:201] 

** /stderr **
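Editor's note: the stderr dump above is one repeating diagnostic cycle. Each iteration probes https://10.0.2.15:8443/healthz (api_server.go:253), gives up on that request after roughly 5s with "Client.Timeout exceeded while awaiting headers" (api_server.go:269), then re-enumerates the control-plane containers with docker ps -a --filter=name=k8s_<component> and re-gathers their logs (logs.go) before probing again; when the overall 6m0s node-wait budget is exhausted, start exits with GUEST_START (exit status 80). Below is a minimal sketch of that probe pattern in Go, assuming the same endpoint and illustrative 5s per-request / 6m overall timeouts; it is not minikube's actual implementation, which lives in the minikube source tree.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		// Roughly the 5s gap between each "Checking apiserver healthz" and
		// "stopped: ... Client.Timeout exceeded" pair in the log above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver certificate inside the guest is not trusted by the host.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		// In minikube, this is where the container enumeration and log
		// gathering happen before the next probe.
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// 6m0s is the node-wait budget named in the GUEST_START error above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

Every probe in the log fails with the same client-side timeout while awaiting headers, i.e. no healthz response ever arrives within the window, so each retry only re-collects the same diagnostics until the deadline.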
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-765000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-09 03:42:40.78619 -0800 PST m=+1265.927914668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-765000 -n running-upgrade-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-765000 -n running-upgrade-765000: exit status 2 (15.612819958s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-765000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | multinode-263000 stop          | multinode-263000          | jenkins | v1.34.0 | 09 Dec 24 03:31 PST | 09 Dec 24 03:31 PST |
	| start   | -p multinode-263000            | multinode-263000          | jenkins | v1.34.0 | 09 Dec 24 03:31 PST |                     |
	|         | --wait=true -v=8               |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| node    | list -p multinode-263000       | multinode-263000          | jenkins | v1.34.0 | 09 Dec 24 03:31 PST |                     |
	| start   | -p multinode-263000-m01        | multinode-263000-m01      | jenkins | v1.34.0 | 09 Dec 24 03:31 PST |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p multinode-263000-m02        | multinode-263000-m02      | jenkins | v1.34.0 | 09 Dec 24 03:31 PST |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| node    | add -p multinode-263000        | multinode-263000          | jenkins | v1.34.0 | 09 Dec 24 03:31 PST |                     |
	| delete  | -p multinode-263000-m02        | multinode-263000-m02      | jenkins | v1.34.0 | 09 Dec 24 03:31 PST | 09 Dec 24 03:31 PST |
	| delete  | -p multinode-263000            | multinode-263000          | jenkins | v1.34.0 | 09 Dec 24 03:31 PST | 09 Dec 24 03:31 PST |
	| start   | -p test-preload-644000         | test-preload-644000       | jenkins | v1.34.0 | 09 Dec 24 03:31 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --preload=false --driver=qemu2 |                           |         |         |                     |                     |
	|         |  --kubernetes-version=v1.24.4  |                           |         |         |                     |                     |
	| delete  | -p test-preload-644000         | test-preload-644000       | jenkins | v1.34.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:32 PST |
	| start   | -p scheduled-stop-146000       | scheduled-stop-146000     | jenkins | v1.34.0 | 09 Dec 24 03:32 PST |                     |
	|         | --memory=2048 --driver=qemu2   |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-146000       | scheduled-stop-146000     | jenkins | v1.34.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:32 PST |
	| start   | -p skaffold-754000             | skaffold-754000           | jenkins | v1.34.0 | 09 Dec 24 03:32 PST |                     |
	|         | --memory=2600 --driver=qemu2   |                           |         |         |                     |                     |
	| delete  | -p skaffold-754000             | skaffold-754000           | jenkins | v1.34.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:32 PST |
	| start   | -p offline-docker-476000       | offline-docker-476000     | jenkins | v1.34.0 | 09 Dec 24 03:32 PST |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-504000   | kubernetes-upgrade-504000 | jenkins | v1.34.0 | 09 Dec 24 03:32 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| delete  | -p offline-docker-476000       | offline-docker-476000     | jenkins | v1.34.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:32 PST |
	| start   | -p stopped-upgrade-416000      | minikube                  | jenkins | v1.26.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:33 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-504000   | kubernetes-upgrade-504000 | jenkins | v1.34.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:32 PST |
	| start   | -p kubernetes-upgrade-504000   | kubernetes-upgrade-504000 | jenkins | v1.34.0 | 09 Dec 24 03:32 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-504000   | kubernetes-upgrade-504000 | jenkins | v1.34.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:32 PST |
	| start   | -p running-upgrade-765000      | minikube                  | jenkins | v1.26.0 | 09 Dec 24 03:32 PST | 09 Dec 24 03:33 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-416000 stop    | minikube                  | jenkins | v1.26.0 | 09 Dec 24 03:33 PST | 09 Dec 24 03:33 PST |
	| start   | -p stopped-upgrade-416000      | stopped-upgrade-416000    | jenkins | v1.34.0 | 09 Dec 24 03:33 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p running-upgrade-765000      | running-upgrade-765000    | jenkins | v1.34.0 | 09 Dec 24 03:33 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 03:33:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:33:50.714908    9658 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:33:50.715086    9658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:33:50.715093    9658 out.go:358] Setting ErrFile to fd 2...
	I1209 03:33:50.715096    9658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:33:50.715242    9658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:33:50.716342    9658 out.go:352] Setting JSON to false
	I1209 03:33:50.735212    9658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5601,"bootTime":1733738429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:33:50.735299    9658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:33:50.739376    9658 out.go:177] * [running-upgrade-765000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:33:50.747323    9658 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:33:50.747441    9658 notify.go:220] Checking for updates...
	I1209 03:33:50.753336    9658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:33:50.757355    9658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:33:50.760382    9658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:33:50.764343    9658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:33:50.767353    9658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:33:50.771715    9658 config.go:182] Loaded profile config "running-upgrade-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:33:50.774333    9658 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 03:33:50.777362    9658 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:33:50.781383    9658 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:33:50.788350    9658 start.go:297] selected driver: qemu2
	I1209 03:33:50.788355    9658 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:33:50.788396    9658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:33:50.790970    9658 cni.go:84] Creating CNI manager for ""
	I1209 03:33:50.791006    9658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:33:50.791042    9658 start.go:340] cluster config:
	{Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:33:50.791097    9658 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:33:50.799340    9658 out.go:177] * Starting "running-upgrade-765000" primary control-plane node in "running-upgrade-765000" cluster
	I1209 03:33:50.803387    9658 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:33:50.803407    9658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1209 03:33:50.803415    9658 cache.go:56] Caching tarball of preloaded images
	I1209 03:33:50.803471    9658 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:33:50.803476    9658 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1209 03:33:50.803532    9658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/config.json ...
	I1209 03:33:50.803928    9658 start.go:360] acquireMachinesLock for running-upgrade-765000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
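The acquireMachinesLock line above shows the lock's retry parameters (Delay:500ms Timeout:13m0s), and a later line reports the acquisition took about 13.1s because the concurrent stopped-upgrade start held it. A Go sketch of that poll-with-deadline pattern; the in-process lock registry here is a stand-in assumption (minikube locks on disk), not its actual API.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // locks is a hypothetical in-process registry standing in for
    // minikube's on-disk machine locks.
    var locks sync.Map

    // tryAcquire is a non-blocking attempt: it succeeds only if no one
    // else holds the named lock yet.
    func tryAcquire(name string) bool {
        _, loaded := locks.LoadOrStore(name, struct{}{})
        return !loaded
    }

    func acquireWithTimeout(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if tryAcquire(name) {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring lock %q after %v", name, timeout)
            }
            time.Sleep(delay) // back off between attempts, per Delay:500ms
        }
    }

    func main() {
        // Mirrors the Delay:500ms Timeout:13m0s parameters logged above.
        if err := acquireWithTimeout("running-upgrade-765000", 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
        }
    }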
	I1209 03:34:02.794909    9647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/config.json ...
	I1209 03:34:02.795168    9647 machine.go:93] provisionDockerMachine start ...
	I1209 03:34:02.795240    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:02.795403    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:02.795407    9647 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 03:34:03.923046    9658 start.go:364] duration metric: took 13.11935275s to acquireMachinesLock for "running-upgrade-765000"
	I1209 03:34:03.923066    9658 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:34:03.923074    9658 fix.go:54] fixHost starting: 
	I1209 03:34:03.923741    9658 fix.go:112] recreateIfNeeded on running-upgrade-765000: state=Running err=<nil>
	W1209 03:34:03.923754    9658 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:34:03.928323    9658 out.go:177] * Updating the running qemu2 "running-upgrade-765000" VM ...
	I1209 03:34:03.936145    9658 machine.go:93] provisionDockerMachine start ...
	I1209 03:34:03.936218    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.936328    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:03.936332    9658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 03:34:04.005130    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-765000
	
	I1209 03:34:04.005145    9658 buildroot.go:166] provisioning hostname "running-upgrade-765000"
	I1209 03:34:04.005205    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.005323    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.005331    9658 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-765000 && echo "running-upgrade-765000" | sudo tee /etc/hostname
	I1209 03:34:04.077834    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-765000
	
	I1209 03:34:04.077906    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.078105    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.078116    9658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-765000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-765000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-765000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:34:04.145380    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:34:04.145393    9658 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20068-6536/.minikube CaCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20068-6536/.minikube}
	I1209 03:34:04.145402    9658 buildroot.go:174] setting up certificates
	I1209 03:34:04.145407    9658 provision.go:84] configureAuth start
	I1209 03:34:04.145411    9658 provision.go:143] copyHostCerts
	I1209 03:34:04.145483    9658 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem, removing ...
	I1209 03:34:04.145492    9658 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem
	I1209 03:34:04.145946    9658 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem (1078 bytes)
	I1209 03:34:04.146176    9658 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem, removing ...
	I1209 03:34:04.146180    9658 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem
	I1209 03:34:04.146228    9658 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem (1123 bytes)
	I1209 03:34:04.146366    9658 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem, removing ...
	I1209 03:34:04.146369    9658 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem
	I1209 03:34:04.146416    9658 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem (1675 bytes)
	I1209 03:34:04.146525    9658 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-765000 san=[127.0.0.1 localhost minikube running-upgrade-765000]
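The provision step above mints a server certificate for org jenkins.running-upgrade-765000 with SANs [127.0.0.1 localhost minikube running-upgrade-765000]. A self-contained crypto/x509 sketch of issuing a certificate with those SANs; it is self-signed for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-765000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs from the san=[...] list in the log line above.
            DNSNames:    []string{"localhost", "minikube", "running-upgrade-765000"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
        }
        // Self-signed (template signs itself); minikube would pass the CA
        // certificate and CA private key as parent and signer instead.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }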
	I1209 03:34:04.206787    9658 provision.go:177] copyRemoteCerts
	I1209 03:34:04.206839    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:34:04.206848    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:34:04.244513    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:34:04.251909    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:34:04.258628    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 03:34:04.265891    9658 provision.go:87] duration metric: took 120.48125ms to configureAuth
	I1209 03:34:04.265900    9658 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:34:04.266013    9658 config.go:182] Loaded profile config "running-upgrade-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:34:04.266070    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.266167    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.266171    9658 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 03:34:04.334558    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 03:34:04.334567    9658 buildroot.go:70] root file system type: tmpfs
	I1209 03:34:04.334623    9658 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 03:34:04.334685    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.334801    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.334834    9658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 03:34:04.409065    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1209 03:34:04.409144    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.409261    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.409270    9658 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 03:34:04.480001    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
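The SSH command above is an idempotency guard: docker.service.new only replaces the live unit, and docker is only reloaded and restarted, when `diff -u` reports a difference (or the old unit is missing). A Go sketch of the same diff-or-replace pattern; the local shell runner is an illustrative stand-in for minikube's SSH runner.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a shell command; minikube would run it over SSH on
    // the guest instead of locally.
    func run(cmd string) error {
        return exec.Command("/bin/sh", "-c", cmd).Run()
    }

    // installUnit swaps in the rendered unit and restarts docker only
    // when the unit actually changed, avoiding a needless restart.
    func installUnit() error {
        const cur = "/lib/systemd/system/docker.service"
        const next = cur + ".new"
        // diff exits non-zero when the files differ or cur is missing --
        // exactly the cases where the new unit should be installed.
        if err := run("sudo diff -u " + cur + " " + next); err == nil {
            return nil // unchanged: skip the disruptive docker restart
        }
        return run("sudo mv " + next + " " + cur +
            " && sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker")
    }

    func main() {
        fmt.Println(installUnit())
    }

Here the command's empty output means the rendered unit matched what was already installed, so docker was left running untouched.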
	I1209 03:34:04.480013    9658 machine.go:96] duration metric: took 543.871958ms to provisionDockerMachine
	I1209 03:34:04.480020    9658 start.go:293] postStartSetup for "running-upgrade-765000" (driver="qemu2")
	I1209 03:34:04.480026    9658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:34:04.480103    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:34:04.480112    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:34:04.516161    9658 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:34:04.517459    9658 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 03:34:04.517468    9658 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/addons for local assets ...
	I1209 03:34:04.517544    9658 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/files for local assets ...
	I1209 03:34:04.517629    9658 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem -> 78202.pem in /etc/ssl/certs
	I1209 03:34:04.517725    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:34:04.520320    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:04.528027    9658 start.go:296] duration metric: took 48.002458ms for postStartSetup
	I1209 03:34:04.528042    9658 fix.go:56] duration metric: took 604.983958ms for fixHost
	I1209 03:34:04.528089    9658 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:04.528190    9658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c2fc0] 0x1028c5800 <nil>  [] 0s} localhost 60526 <nil> <nil>}
	I1209 03:34:04.528196    9658 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:34:04.597091    9658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744044.426322980
	
	I1209 03:34:04.597099    9658 fix.go:216] guest clock: 1733744044.426322980
	I1209 03:34:04.597105    9658 fix.go:229] Guest: 2024-12-09 03:34:04.42632298 -0800 PST Remote: 2024-12-09 03:34:04.528044 -0800 PST m=+13.839284959 (delta=-101.72102ms)
	I1209 03:34:04.597116    9658 fix.go:200] guest clock delta is within tolerance: -101.72102ms
	I1209 03:34:04.597121    9658 start.go:83] releasing machines lock for "running-upgrade-765000", held for 674.078292ms
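The fixHost step above compares the guest's `date +%s.%N` output against the host clock: guest 1733744044.426322980 versus remote 03:34:04.528044 gives delta=-101.72102ms, inside tolerance, so no resync is needed. A worked Go version of that arithmetic; the 2s tolerance is an assumption, since the log does not state the threshold.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // guestClockDelta parses the guest's `date +%s.%N` output and returns
    // guest minus host (float64 parsing loses some nanosecond precision,
    // which is fine at millisecond scale).
    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Values from the log: guest 1733744044.426322980, remote 03:34:04.528044 PST.
        host := time.Unix(0, int64(1733744044.528044*float64(time.Second)))
        d, _ := guestClockDelta("1733744044.426322980", host)
        fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < 2*time.Second)
    }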
	I1209 03:34:04.597197    9658 ssh_runner.go:195] Run: cat /version.json
	I1209 03:34:04.597206    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:34:04.597197    9658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:34:04.597232    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	W1209 03:34:04.597736    9658 sshutil.go:64] dial failure (will retry): dial tcp [::1]:60526: connect: connection refused
	I1209 03:34:04.597754    9658 retry.go:31] will retry after 286.111134ms: dial tcp [::1]:60526: connect: connection refused
	W1209 03:34:04.630614    9658 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 03:34:04.630661    9658 ssh_runner.go:195] Run: systemctl --version
	I1209 03:34:04.632429    9658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:34:04.634038    9658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:34:04.634070    9658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 03:34:04.637276    9658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 03:34:04.641949    9658 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 03:34:04.641956    9658 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.642030    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.647249    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 03:34:04.650000    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 03:34:04.652995    9658 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 03:34:04.653024    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 03:34:04.656395    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.659772    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 03:34:04.663025    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.666022    9658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:34:04.669377    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 03:34:04.672229    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 03:34:04.675733    9658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 03:34:04.679135    9658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:34:04.682127    9658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:34:04.684786    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:04.787847    9658 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 03:34:04.799515    9658 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.799597    9658 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 03:34:04.807272    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.811762    9658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:34:04.818502    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.823541    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 03:34:04.828464    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.834086    9658 ssh_runner.go:195] Run: which cri-dockerd
	I1209 03:34:04.835434    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 03:34:04.838016    9658 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 03:34:04.843140    9658 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 03:34:04.945512    9658 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 03:34:05.051811    9658 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 03:34:05.051867    9658 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 03:34:05.058554    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:05.164918    9658 ssh_runner.go:195] Run: sudo systemctl restart docker
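Above, a 130-byte daemon.json is copied in before docker is restarted, to switch it to the cgroupfs cgroup driver. The payload itself is not shown in the log; a plausible minimal shape, rendered below as an assumption rather than the captured bytes, uses Docker's standard exec-opts key.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed daemon.json content for forcing the cgroupfs driver;
        // exec-opts/native.cgroupdriver is the documented Docker setting.
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
    }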
	I1209 03:34:02.866807    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 03:34:02.866854    9647 buildroot.go:166] provisioning hostname "stopped-upgrade-416000"
	I1209 03:34:02.866939    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:02.867050    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:02.867056    9647 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-416000 && echo "stopped-upgrade-416000" | sudo tee /etc/hostname
	I1209 03:34:02.938278    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-416000
	
	I1209 03:34:02.938359    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:02.938482    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:02.938489    9647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-416000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-416000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-416000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:34:03.009417    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:34:03.009430    9647 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20068-6536/.minikube CaCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20068-6536/.minikube}
	I1209 03:34:03.009439    9647 buildroot.go:174] setting up certificates
	I1209 03:34:03.009444    9647 provision.go:84] configureAuth start
	I1209 03:34:03.009455    9647 provision.go:143] copyHostCerts
	I1209 03:34:03.009545    9647 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem, removing ...
	I1209 03:34:03.009574    9647 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem
	I1209 03:34:03.009676    9647 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem (1078 bytes)
	I1209 03:34:03.009842    9647 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem, removing ...
	I1209 03:34:03.009846    9647 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem
	I1209 03:34:03.009893    9647 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem (1123 bytes)
	I1209 03:34:03.010024    9647 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem, removing ...
	I1209 03:34:03.010029    9647 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem
	I1209 03:34:03.010071    9647 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem (1675 bytes)
	I1209 03:34:03.010172    9647 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-416000 san=[127.0.0.1 localhost minikube stopped-upgrade-416000]
	I1209 03:34:03.208189    9647 provision.go:177] copyRemoteCerts
	I1209 03:34:03.208272    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:34:03.208281    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:34:03.244794    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:34:03.252797    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 03:34:03.261330    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:34:03.269384    9647 provision.go:87] duration metric: took 259.930667ms to configureAuth
	I1209 03:34:03.269399    9647 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:34:03.269550    9647 config.go:182] Loaded profile config "stopped-upgrade-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:34:03.269607    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.269704    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.269710    9647 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 03:34:03.340084    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 03:34:03.340096    9647 buildroot.go:70] root file system type: tmpfs
	I1209 03:34:03.340165    9647 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 03:34:03.340248    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.340372    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.340412    9647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 03:34:03.412405    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1209 03:34:03.412478    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.412591    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.412601    9647 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 03:34:03.799847    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1209 03:34:03.799864    9647 machine.go:96] duration metric: took 1.004709292s to provisionDockerMachine
	I1209 03:34:03.799871    9647 start.go:293] postStartSetup for "stopped-upgrade-416000" (driver="qemu2")
	I1209 03:34:03.799877    9647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:34:03.799951    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:34:03.799963    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:34:03.838458    9647 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:34:03.840072    9647 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 03:34:03.840083    9647 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/addons for local assets ...
	I1209 03:34:03.840166    9647 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/files for local assets ...
	I1209 03:34:03.840265    9647 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem -> 78202.pem in /etc/ssl/certs
	I1209 03:34:03.840377    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:34:03.845422    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:03.856053    9647 start.go:296] duration metric: took 56.175583ms for postStartSetup
	I1209 03:34:03.856075    9647 fix.go:56] duration metric: took 20.945300875s for fixHost
	I1209 03:34:03.856134    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.856249    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.856257    9647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:34:03.922981    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744044.217951046
	
	I1209 03:34:03.922994    9647 fix.go:216] guest clock: 1733744044.217951046
	I1209 03:34:03.923000    9647 fix.go:229] Guest: 2024-12-09 03:34:04.217951046 -0800 PST Remote: 2024-12-09 03:34:03.856076 -0800 PST m=+21.147266376 (delta=361.875046ms)
	I1209 03:34:03.923012    9647 fix.go:200] guest clock delta is within tolerance: 361.875046ms
	I1209 03:34:03.923014    9647 start.go:83] releasing machines lock for "stopped-upgrade-416000", held for 21.012249875s
	I1209 03:34:03.923093    9647 ssh_runner.go:195] Run: cat /version.json
	I1209 03:34:03.923103    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:34:03.923176    9647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:34:03.924000    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	W1209 03:34:04.003528    9647 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 03:34:04.003602    9647 ssh_runner.go:195] Run: systemctl --version
	I1209 03:34:04.006059    9647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:34:04.008086    9647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:34:04.008137    9647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 03:34:04.011460    9647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 03:34:04.016358    9647 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 03:34:04.016368    9647 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.016475    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.023456    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 03:34:04.026421    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 03:34:04.029333    9647 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 03:34:04.029367    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 03:34:04.032700    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.036352    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 03:34:04.039861    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.043066    9647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:34:04.046011    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 03:34:04.048996    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 03:34:04.052290    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 03:34:04.055672    9647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:34:04.058696    9647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:34:04.061182    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:04.140573    9647 ssh_runner.go:195] Run: sudo systemctl restart containerd
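
The sed series above is the cgroup-driver alignment step: it rewrites /etc/containerd/config.toml so that SystemdCgroup = false (matching the "cgroupfs" driver the log announces), pins the sandbox image to registry.k8s.io/pause:3.7, migrates runtime names to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d, then reloads systemd and restarts containerd. A sketch of driving the same edits with os/exec; running locally under sh -c is an illustrative stand-in for minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	steps := []string{
    		// Same substitutions the log shows: force the cgroupfs driver
    		// and pin the pause image.
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml`,
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart containerd",
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
    			fmt.Printf("%q failed: %v\n%s", s, err, out)
    			return
    		}
    	}
    	fmt.Println("containerd reconfigured for cgroupfs")
    }
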
	I1209 03:34:04.152245    9647 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.152352    9647 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 03:34:04.161096    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.165870    9647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:34:04.172000    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.177166    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 03:34:04.182447    9647 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 03:34:04.219053    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 03:34:04.223720    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.229184    9647 ssh_runner.go:195] Run: which cri-dockerd
	I1209 03:34:04.230597    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 03:34:04.233197    9647 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 03:34:04.238342    9647 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 03:34:04.315959    9647 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 03:34:04.396893    9647 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 03:34:04.396951    9647 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 03:34:04.402276    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:04.482870    9647 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:05.608912    9647 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126048334s)
	I1209 03:34:05.608988    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 03:34:05.614415    9647 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1209 03:34:05.621066    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:05.626241    9647 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 03:34:05.705338    9647 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 03:34:05.796042    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:05.882684    9647 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 03:34:05.888241    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:05.892930    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:05.974355    9647 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 03:34:06.013161    9647 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 03:34:06.013255    9647 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 03:34:06.015780    9647 start.go:563] Will wait 60s for crictl version
	I1209 03:34:06.015845    9647 ssh_runner.go:195] Run: which crictl
	I1209 03:34:06.017314    9647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:34:06.032479    9647 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 03:34:06.032560    9647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:06.049478    9647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:06.066839    9647 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 03:34:06.066926    9647 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 03:34:06.068118    9647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
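
The one-liner above makes the host.minikube.internal mapping idempotent: grep -v strips any existing entry, a fresh 10.0.2.2 line is appended, and the result is copied back over /etc/hosts with sudo. The same filter-and-append expressed in Go (direct file I/O here stands in for the remote shell pipeline):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const entry = "10.0.2.2\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping, exactly what the grep -v does.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	// /etc/hosts is root-owned, so the copy back goes through sudo,
    	// mirroring the `sudo cp /tmp/h.$$ /etc/hosts` in the log.
    	if out, err := exec.Command("sudo", "cp", tmp, "/etc/hosts").CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("%v: %s", err, out))
    	}
    }
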
	I1209 03:34:06.072392    9647 kubeadm.go:883] updating cluster {Name:stopped-upgrade-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 03:34:06.072446    9647 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:34:06.072500    9647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:06.083597    9647 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:06.083606    9647 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:06.083668    9647 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:06.086895    9647 ssh_runner.go:195] Run: which lz4
	I1209 03:34:06.088116    9647 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:34:06.089462    9647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:34:06.089476    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
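
The stat/scp pair above is the copy-if-missing idiom used throughout this log: the stat -c "%s %y" probe exits 1 when the target is absent, and only then is the 359 MB preload tarball pushed over. The same guard in Go, using local files in place of the SSH transfer:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // copyIfMissing mirrors the log's pattern: stat the destination first
    // and only transfer when it does not already exist.
    func copyIfMissing(src, dst string) error {
    	if fi, err := os.Stat(dst); err == nil {
    		fmt.Printf("%s already present (%d bytes), skipping\n", dst, fi.Size())
    		return nil
    	} else if !os.IsNotExist(err) {
    		return err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

The %s %y format captures size and mtime rather than bare existence, which a fuller implementation could compare to detect a stale or partial upload and redo it.
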
	I1209 03:34:07.001379    9647 docker.go:653] duration metric: took 913.325958ms to copy over tarball
	I1209 03:34:07.001452    9647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 03:34:08.195759    9647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.194315084s)
	I1209 03:34:08.195772    9647 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 03:34:08.211396    9647 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:08.214390    9647 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 03:34:08.219128    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:08.303514    9647 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:09.928134    9647 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.624636042s)
	I1209 03:34:09.928492    9647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:09.943672    9647 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:09.943680    9647 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:09.943687    9647 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 03:34:09.951896    9647 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:09.954028    9647 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:09.955947    9647 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:09.956002    9647 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:09.958044    9647 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:09.958088    9647 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:09.959654    9647 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:09.959856    9647 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:09.960404    9647 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 03:34:09.961609    9647 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:09.962288    9647 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:09.962936    9647 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:09.962953    9647 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 03:34:09.963439    9647 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:09.964763    9647 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:09.964772    9647 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.430931    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:10.443427    9647 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 03:34:10.443825    9647 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:10.443890    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:10.456428    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 03:34:10.474630    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:10.486984    9647 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 03:34:10.487014    9647 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:10.487111    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:10.499706    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 03:34:10.504167    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:10.515763    9647 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 03:34:10.515786    9647 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:10.515855    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:10.528292    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 03:34:10.594482    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 03:34:10.606563    9647 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 03:34:10.606598    9647 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 03:34:10.606666    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1209 03:34:10.618923    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 03:34:10.619053    9647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 03:34:10.620801    9647 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 03:34:10.620814    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1209 03:34:10.629397    9647 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 03:34:10.629418    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1209 03:34:10.660043    9647 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
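
Each cached image goes through the cycle shown above: docker image inspect --format {{.Id}} reads the digest known to the runtime, a mismatch against the expected arm64 hash marks the tag as "needs transfer", docker rmi drops it, and the cached tarball is piped into docker load. A condensed sketch of one cycle with os/exec; the expected hash is the pause:3.7 value from the log, and the tarball path matches the one scp'd above:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const (
    		image    = "registry.k8s.io/pause:3.7"
    		expected = "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
    		tarball  = "/var/lib/minikube/images/pause_3.7"
    	)
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err == nil && strings.HasSuffix(strings.TrimSpace(string(out)), expected) {
    		fmt.Println(image, "already present at the expected hash")
    		return
    	}
    	// Missing or wrong digest: drop the tag (ignoring the error if it
    	// was never there) and load the cached tarball from disk, like the
    	// `docker rmi` + `sudo cat ... | docker load` pair in the log.
    	_ = exec.Command("docker", "rmi", image).Run()
    	f, err := os.Open(tarball)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	load := exec.Command("docker", "load")
    	load.Stdin = f
    	if b, err := load.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("docker load: %v\n%s", err, b))
    	}
    	fmt.Println("loaded", image, "from cache")
    }
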
	I1209 03:34:10.712597    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:10.722965    9647 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 03:34:10.722988    9647 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:10.723054    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:10.732729    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 03:34:10.776605    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:10.787165    9647 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 03:34:10.787196    9647 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:10.787264    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:10.796981    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1209 03:34:10.840311    9647 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:10.840638    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.850665    9647 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 03:34:10.850689    9647 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.850754    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.860778    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 03:34:10.860926    9647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:10.862606    9647 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 03:34:10.862626    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1209 03:34:10.901152    9647 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:10.901169    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 03:34:10.936899    9647 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1209 03:34:11.153373    9647 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:11.154522    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:11.175521    9647 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 03:34:11.175558    9647 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:11.175658    9647 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:11.195514    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 03:34:11.195677    9647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 03:34:11.197460    9647 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 03:34:11.197471    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 03:34:11.230898    9647 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 03:34:11.230912    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1209 03:34:11.464532    9647 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 03:34:11.464572    9647 cache_images.go:92] duration metric: took 1.520906833s to LoadCachedImages
	W1209 03:34:11.464608    9647 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I1209 03:34:11.464616    9647 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 03:34:11.464673    9647 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-416000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 03:34:11.464746    9647 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 03:34:11.478338    9647 cni.go:84] Creating CNI manager for ""
	I1209 03:34:11.478350    9647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:34:11.478606    9647 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 03:34:11.478622    9647 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-416000 NodeName:stopped-upgrade-416000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:34:11.478697    9647 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-416000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 03:34:11.478765    9647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 03:34:11.481592    9647 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 03:34:11.481630    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:34:11.484332    9647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 03:34:11.489629    9647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:34:11.494739    9647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 03:34:11.500287    9647 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 03:34:11.501640    9647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 03:34:11.505154    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:11.583568    9647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:34:11.591916    9647 certs.go:68] Setting up /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000 for IP: 10.0.2.15
	I1209 03:34:11.591927    9647 certs.go:194] generating shared ca certs ...
	I1209 03:34:11.591937    9647 certs.go:226] acquiring lock for ca certs: {Name:mkab7ef03786804c126b265c91619df81c881ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.592354    9647 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key
	I1209 03:34:11.592581    9647 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key
	I1209 03:34:11.592600    9647 certs.go:256] generating profile certs ...
	I1209 03:34:11.593262    9647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.key
	I1209 03:34:11.593280    9647 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50
	I1209 03:34:11.593290    9647 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 03:34:11.730240    9647 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50 ...
	I1209 03:34:11.730257    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50: {Name:mk9f53df097e6cd17fb158ce3b910804aa4c0973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.730609    9647 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50 ...
	I1209 03:34:11.730614    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50: {Name:mk2653b45057ab70adba95a9012e2d47f2c51c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.730792    9647 certs.go:381] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt
	I1209 03:34:11.730939    9647 certs.go:385] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key
	I1209 03:34:11.731301    9647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/proxy-client.key
	I1209 03:34:11.731513    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem (1338 bytes)
	W1209 03:34:11.731747    9647 certs.go:480] ignoring /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820_empty.pem, impossibly tiny 0 bytes
	I1209 03:34:11.731759    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:34:11.731786    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:34:11.731807    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:34:11.731828    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem (1675 bytes)
	I1209 03:34:11.731874    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:11.734370    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:34:11.741287    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:34:11.748013    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:34:11.755561    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:34:11.762185    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 03:34:11.768865    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 03:34:11.775886    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:34:11.782985    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 03:34:11.789595    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem --> /usr/share/ca-certificates/7820.pem (1338 bytes)
	I1209 03:34:11.796075    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /usr/share/ca-certificates/78202.pem (1708 bytes)
	I1209 03:34:11.803072    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 03:34:11.809548    9647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 03:34:11.814634    9647 ssh_runner.go:195] Run: openssl version
	I1209 03:34:11.816436    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 03:34:11.819370    9647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:11.820754    9647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:11.820775    9647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:11.822590    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 03:34:11.825392    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7820.pem && ln -fs /usr/share/ca-certificates/7820.pem /etc/ssl/certs/7820.pem"
	I1209 03:34:11.828718    9647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7820.pem
	I1209 03:34:11.830172    9647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 11:22 /usr/share/ca-certificates/7820.pem
	I1209 03:34:11.830196    9647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7820.pem
	I1209 03:34:11.831970    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7820.pem /etc/ssl/certs/51391683.0"
	I1209 03:34:11.835055    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78202.pem && ln -fs /usr/share/ca-certificates/78202.pem /etc/ssl/certs/78202.pem"
	I1209 03:34:11.838111    9647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78202.pem
	I1209 03:34:11.839417    9647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 11:22 /usr/share/ca-certificates/78202.pem
	I1209 03:34:11.839444    9647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78202.pem
	I1209 03:34:11.841261    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78202.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 03:34:11.844542    9647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 03:34:11.846223    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 03:34:11.848182    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 03:34:11.850070    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 03:34:11.852105    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 03:34:11.853984    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 03:34:11.855673    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
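
The six openssl runs above are expiry checks: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours, which is what later drives certificate regeneration. The same sweep as a standalone Go program (the path list is copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
    		"/var/lib/minikube/certs/etcd/peer.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		// -checkend 86400: fail if the cert expires within one day.
    		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
    		if err != nil {
    			fmt.Printf("%s expires within 24h (or is unreadable): %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s valid for at least 24h\n", c)
    	}
    }
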
	I1209 03:34:11.859125    9647 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:34:11.859207    9647 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:11.869081    9647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 03:34:11.872486    9647 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 03:34:11.872659    9647 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 03:34:11.872689    9647 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 03:34:11.875582    9647 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:11.875801    9647 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-416000" does not appear in /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:34:11.875824    9647 kubeconfig.go:62] /Users/jenkins/minikube-integration/20068-6536/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-416000" cluster setting kubeconfig missing "stopped-upgrade-416000" context setting]
	I1209 03:34:11.875987    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.877633    9647 kapi.go:59] client config for stopped-upgrade-416000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102bcb740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:34:11.883332    9647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 03:34:11.886043    9647 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-416000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
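
Drift detection above is just diff -u plus its exit status: the stored kubeadm.yaml still names the pre-upgrade criSocket and systemd cgroup driver, diff exits 1, and minikube decides to reconfigure the cluster from the .new file. A sketch of reading diff's exit code that way in Go:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.Output()
    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("configs identical, no reconfiguration needed")
    	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
    		// diff exits 1 when the files differ: treat as drift and
    		// reconfigure from the .new file, as the log does.
    		fmt.Printf("detected kubeadm config drift:\n%s", out)
    	default:
    		fmt.Println("diff failed:", err)
    	}
    }

Exit code 1 specifically means "files differ"; GNU diff reserves 2 for trouble such as a missing file, which is why the default branch is kept separate.
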
	I1209 03:34:11.886048    9647 kubeadm.go:1160] stopping kube-system containers ...
	I1209 03:34:11.886093    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:11.896540    9647 docker.go:483] Stopping containers: [a572daa6beda 8e04376e2372 5302a3675333 5b19c97e6b50 8c74a6bfa12f 30c1dd3114a2 e540ad2ee556 31622873173a]
	I1209 03:34:11.896617    9647 ssh_runner.go:195] Run: docker stop a572daa6beda 8e04376e2372 5302a3675333 5b19c97e6b50 8c74a6bfa12f 30c1dd3114a2 e540ad2ee556 31622873173a
	I1209 03:34:11.906994    9647 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:34:11.912431    9647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:34:11.915623    9647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:34:11.915628    9647 kubeadm.go:157] found existing configuration files:
	
	I1209 03:34:11.915655    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf
	I1209 03:34:11.918135    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:34:11.918165    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:34:11.920790    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf
	I1209 03:34:11.923852    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:34:11.923887    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:34:11.926769    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf
	I1209 03:34:11.929146    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:34:11.929180    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:34:11.932140    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf
	I1209 03:34:11.934820    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:34:11.934850    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
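
The four grep/rm pairs above implement the stale-config cleanup that the earlier "config check failed" message announced: any kubeconfig under /etc/kubernetes that does not mention the current control-plane endpoint (or does not exist at all) is removed so the kubeadm init phases below can regenerate it. A compact version of that loop, with direct file access standing in for the sudo grep/rm shown in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:60521"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			fmt.Println("kept", f)
    			continue
    		}
    		// Missing or pointing at a stale endpoint: remove it so
    		// `kubeadm init phase kubeconfig all` can rewrite it.
    		os.Remove(f)
    		fmt.Println("removed stale", f)
    	}
    }
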
	I1209 03:34:11.937291    9647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:34:11.940465    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:11.962525    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.378476    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.512080    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.542009    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.562588    9647 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:34:12.562698    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:13.064754    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:13.564839    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:13.572476    9647 api_server.go:72] duration metric: took 1.009903792s to wait for apiserver process to appear ...
	I1209 03:34:13.572487    9647 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:34:13.572710    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
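
The healthz wait that starts here (and produces the Client.Timeout errors at 03:34:18 and 03:34:23 below) is a poll loop against https://10.0.2.15:8443/healthz with a per-request deadline. A sketch of such a loop; the timeout values are illustrative, and the InsecureSkipVerify shortcut exists only to keep the sketch self-contained, since per the rest.Config dump earlier the real client authenticates with the profile's client certificate and CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request deadline, as in the log's timeouts
    		Transport: &http.Transport{
    			// Sketch-only shortcut: minikube verifies against its own CA
    			// and presents a client certificate instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for apiserver healthz")
    }
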
	I1209 03:34:21.484796    9658 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.320166583s)
	I1209 03:34:21.484882    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 03:34:21.491283    9658 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1209 03:34:21.498798    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:21.504374    9658 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 03:34:21.577634    9658 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 03:34:21.638515    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:21.727938    9658 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 03:34:21.734838    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:21.739566    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:21.819646    9658 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 03:34:21.862263    9658 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 03:34:21.862374    9658 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 03:34:21.864493    9658 start.go:563] Will wait 60s for crictl version
	I1209 03:34:21.864562    9658 ssh_runner.go:195] Run: which crictl
	I1209 03:34:21.866295    9658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:34:21.878885    9658 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 03:34:21.878962    9658 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:21.891945    9658 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:18.575681    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:18.575773    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:21.909032    9658 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 03:34:21.909168    9658 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 03:34:21.910661    9658 kubeadm.go:883] updating cluster {Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 03:34:21.910701    9658 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:34:21.910746    9658 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:21.928820    9658 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:21.928830    9658 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:21.928903    9658 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:21.932265    9658 ssh_runner.go:195] Run: which lz4
	I1209 03:34:21.933949    9658 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:34:21.935082    9658 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:34:21.935092    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
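The stat/scp pair above is ssh_runner's standard transfer pattern: probe the remote path with `stat -c "%s %y"`, and only copy the file when the probe exits non-zero. A local sketch of the same check-then-copy logic, assuming plain files rather than an SSH session:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing mimics the check-then-scp pattern in the log:
// if the destination doesn't exist, stream the source file over.
// Paths are illustrative, not the ones minikube actually uses.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already there, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Println("copy failed:", err)
	}
}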
	I1209 03:34:22.904189    9658 docker.go:653] duration metric: took 970.299125ms to copy over tarball
	I1209 03:34:22.904262    9658 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 03:34:24.031865    9658 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.127608917s)
	I1209 03:34:24.031879    9658 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 03:34:24.047533    9658 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:24.050374    9658 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 03:34:24.055457    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:24.141701    9658 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:23.576520    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:23.576550    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:25.735901    9658 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.594212875s)
	I1209 03:34:25.736012    9658 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:25.754507    9658 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:25.754517    9658 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:25.754522    9658 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 03:34:25.760127    9658 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:25.763291    9658 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:25.765776    9658 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:25.765818    9658 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:25.767664    9658 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:25.767581    9658 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:25.769397    9658 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:25.770127    9658 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:25.770387    9658 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:25.770882    9658 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:25.771936    9658 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:25.772065    9658 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:25.772941    9658 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 03:34:25.773086    9658 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:25.774252    9658 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:25.774584    9658 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
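The burst of "No such image" errors above is expected rather than fatal: image.go first asks the local daemon for each image, and a failed daemon lookup simply routes the retrieval to the registry or the on-disk cache. A rough sketch of that probe, using `docker image inspect` and illustrative image names from the log:

package main

import (
	"fmt"
	"os/exec"
)

// inDaemon reports whether the local Docker daemon already has the image,
// the same question the image.go "daemon lookup" lines above are answering.
func inDaemon(image string) bool {
	// `docker image inspect` exits non-zero with "No such image" when absent.
	return exec.Command("docker", "image", "inspect", image).Run() == nil
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.24.1",
		"registry.k8s.io/pause:3.7",
	} {
		if inDaemon(img) {
			fmt.Println(img, "found in daemon")
		} else {
			// minikube's fallback is to fetch the image from the
			// registry or its cache directory; here we just report it.
			fmt.Println(img, "not in daemon, would fall back to cache/registry")
		}
	}
}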
	I1209 03:34:26.366021    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:26.372345    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:26.379720    9658 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 03:34:26.379758    9658 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:26.379862    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:26.387590    9658 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 03:34:26.387609    9658 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:26.387663    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:26.392829    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:26.401690    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 03:34:26.406041    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 03:34:26.412692    9658 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 03:34:26.412727    9658 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:26.412797    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:26.422513    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 03:34:26.455672    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:26.465946    9658 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 03:34:26.465970    9658 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:26.466033    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:26.476349    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1209 03:34:26.485895    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:26.496813    9658 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 03:34:26.496831    9658 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:26.496896    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:26.506902    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1209 03:34:26.548356    9658 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:26.548510    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:26.559709    9658 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 03:34:26.559733    9658 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:26.559797    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:26.570314    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 03:34:26.570439    9658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:26.572092    9658 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 03:34:26.572104    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1209 03:34:26.619306    9658 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:26.619319    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 03:34:26.630221    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 03:34:26.666193    9658 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1209 03:34:26.666233    9658 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 03:34:26.666254    9658 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 03:34:26.666317    9658 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1209 03:34:26.678424    9658 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 03:34:26.678552    9658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 03:34:26.680065    9658 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 03:34:26.680086    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1209 03:34:26.687677    9658 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 03:34:26.687684    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1209 03:34:26.688307    9658 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:26.688422    9658 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:26.718311    9658 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1209 03:34:26.718349    9658 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 03:34:26.718368    9658 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:26.718435    9658 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:26.736813    9658 cache_images.go:92] duration metric: took 982.300958ms to LoadCachedImages
	W1209 03:34:26.736857    9658 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
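For the images that do transfer (coredns and pause above), the sequence is: remove the mismatched tag, scp the cached tarball into /var/lib/minikube/images, then pipe it into `docker load`. A sketch of the final load step, assuming the tarball path shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage mirrors the "Loading image" step in the log: the cached image
// tarball is streamed into `docker load`, as in
// `sudo cat /var/lib/minikube/images/pause_3.7 | docker load`.
func loadImage(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // equivalent of piping `cat tarball` into docker load
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Path is illustrative; the log uses /var/lib/minikube/images/pause_3.7.
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println("docker load failed:", err)
	}
}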
	I1209 03:34:26.736862    9658 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 03:34:26.736917    9658 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-765000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 03:34:26.737000    9658 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 03:34:26.750738    9658 cni.go:84] Creating CNI manager for ""
	I1209 03:34:26.750751    9658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:34:26.750760    9658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 03:34:26.750768    9658 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-765000 NodeName:running-upgrade-765000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:34:26.750847    9658 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-765000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
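The generated file above is one multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A small sketch that splits the file on document boundaries and reports each `kind:`, assuming the /var/tmp/minikube path from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the scp destination in the log above.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm configs are multiple YAML documents joined with "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		// The "kind:" line identifies which component each document configures.
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", i+1,
					strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
}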
	I1209 03:34:26.750916    9658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 03:34:26.754540    9658 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 03:34:26.754578    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:34:26.757672    9658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 03:34:26.763427    9658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:34:26.768487    9658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 03:34:26.773609    9658 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 03:34:26.774821    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:26.858362    9658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:34:26.864194    9658 certs.go:68] Setting up /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000 for IP: 10.0.2.15
	I1209 03:34:26.864206    9658 certs.go:194] generating shared ca certs ...
	I1209 03:34:26.864215    9658 certs.go:226] acquiring lock for ca certs: {Name:mkab7ef03786804c126b265c91619df81c881ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:26.864370    9658 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key
	I1209 03:34:26.864612    9658 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key
	I1209 03:34:26.864622    9658 certs.go:256] generating profile certs ...
	I1209 03:34:26.864804    9658 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.key
	I1209 03:34:26.864819    9658 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5
	I1209 03:34:26.864831    9658 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 03:34:26.995838    9658 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5 ...
	I1209 03:34:26.995847    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5: {Name:mk3d8b0b158c1e7ed7c5c1d9d3c8299c2774743f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:26.996194    9658 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5 ...
	I1209 03:34:26.996200    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5: {Name:mk5e0412c77b429448e56f506b3d7f4b764e026f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:26.996372    9658 certs.go:381] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt.7fa6ebc5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt
	I1209 03:34:26.996509    9658 certs.go:385] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key.7fa6ebc5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key
	I1209 03:34:26.996865    9658 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/proxy-client.key
	I1209 03:34:26.997022    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem (1338 bytes)
	W1209 03:34:26.997199    9658 certs.go:480] ignoring /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820_empty.pem, impossibly tiny 0 bytes
	I1209 03:34:26.997205    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:34:26.997376    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:34:26.997567    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:34:26.998237    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem (1675 bytes)
	I1209 03:34:26.998354    9658 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:26.998920    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:34:27.006287    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:34:27.013070    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:34:27.019718    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:34:27.026259    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 03:34:27.033368    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 03:34:27.040667    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:34:27.047400    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 03:34:27.054166    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem --> /usr/share/ca-certificates/7820.pem (1338 bytes)
	I1209 03:34:27.061132    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /usr/share/ca-certificates/78202.pem (1708 bytes)
	I1209 03:34:27.067725    9658 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 03:34:27.074521    9658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 03:34:27.079749    9658 ssh_runner.go:195] Run: openssl version
	I1209 03:34:27.081414    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7820.pem && ln -fs /usr/share/ca-certificates/7820.pem /etc/ssl/certs/7820.pem"
	I1209 03:34:27.084961    9658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7820.pem
	I1209 03:34:27.086525    9658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 11:22 /usr/share/ca-certificates/7820.pem
	I1209 03:34:27.086573    9658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7820.pem
	I1209 03:34:27.088520    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7820.pem /etc/ssl/certs/51391683.0"
	I1209 03:34:27.091161    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78202.pem && ln -fs /usr/share/ca-certificates/78202.pem /etc/ssl/certs/78202.pem"
	I1209 03:34:27.094269    9658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78202.pem
	I1209 03:34:27.095923    9658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 11:22 /usr/share/ca-certificates/78202.pem
	I1209 03:34:27.095960    9658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78202.pem
	I1209 03:34:27.097849    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78202.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 03:34:27.100877    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 03:34:27.103712    9658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:27.105056    9658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:27.105081    9658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:27.106835    9658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
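The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each symlink name (51391683.0, 3ec20f2e.0, b5213941.0) is the certificate's subject-name hash from `openssl x509 -hash -noout` plus a ".0" suffix, which is how anything reading /etc/ssl/certs finds the CA. A sketch of that convention (requires the openssl binary; paths are the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink reproduces the pattern in the log: compute the OpenSSL subject
// hash of a CA certificate and symlink <hash>.0 to it so OpenSSL's
// hashed-directory lookup in /etc/ssl/certs can find it.
func hashLink(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	// minikubeCA.pem hashes to b5213941 in the log above.
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}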
	I1209 03:34:27.109732    9658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 03:34:27.111063    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 03:34:27.112901    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 03:34:27.115015    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 03:34:27.116809    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 03:34:27.120216    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 03:34:27.121772    9658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1209 03:34:27.123692    9658 kubeadm.go:392] StartCluster: {Name:running-upgrade-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60625 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:34:27.123774    9658 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:27.134195    9658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 03:34:27.137397    9658 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 03:34:27.137412    9658 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 03:34:27.137447    9658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 03:34:27.140135    9658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.140508    9658 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-765000" does not appear in /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:34:27.140602    9658 kubeconfig.go:62] /Users/jenkins/minikube-integration/20068-6536/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-765000" cluster setting kubeconfig missing "running-upgrade-765000" context setting]
	I1209 03:34:27.140793    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:27.141235    9658 kapi.go:59] client config for running-upgrade-765000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10431f740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:34:27.141692    9658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 03:34:27.144552    9658 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-765000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
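Drift detection here is just `diff -u` plus exit-status inspection: diff exits 0 when the deployed kubeadm.yaml matches the freshly rendered .new file and 1 when they differ, and any difference triggers the reconfigure path above. A sketch under that assumption, with the paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` the way the log does and treats a
// non-zero exit (diff found differences) as drift.
func configDrifted(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	// diff exits 0 when the files are identical, 1 when they differ.
	return err != nil, string(out)
}

func main() {
	drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if drifted {
		fmt.Println("detected kubeadm config drift, will reconfigure:")
		fmt.Print(diff)
	}
}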
	I1209 03:34:27.144562    9658 kubeadm.go:1160] stopping kube-system containers ...
	I1209 03:34:27.144612    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:27.155897    9658 docker.go:483] Stopping containers: [1c740c03f549 17f5919310d0 f8298f4cf6b7 4a0867c31619 a42b643cfd15 ea74a1ab70f6 d3cb70f32269 7308bbae6c56 67a9fa94ff40 8c650fdc680b a11102cde514 1a73eb25f21c d3844cda8a7f 3aedd3462ec5 3305f5c92771 f22821d4ef46 41e895ffc8b0 7fa6b2f2ffef 499eb08d6e00 220cd1904346 266a6560f67c 76a9b1fd66d5 759fee327ac1 7ffc44e0f4b3 4cc6da64f4fb]
	I1209 03:34:27.155978    9658 ssh_runner.go:195] Run: docker stop 1c740c03f549 17f5919310d0 f8298f4cf6b7 4a0867c31619 a42b643cfd15 ea74a1ab70f6 d3cb70f32269 7308bbae6c56 67a9fa94ff40 8c650fdc680b a11102cde514 1a73eb25f21c d3844cda8a7f 3aedd3462ec5 3305f5c92771 f22821d4ef46 41e895ffc8b0 7fa6b2f2ffef 499eb08d6e00 220cd1904346 266a6560f67c 76a9b1fd66d5 759fee327ac1 7ffc44e0f4b3 4cc6da64f4fb
	I1209 03:34:27.167451    9658 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:34:27.254344    9658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:34:27.258276    9658 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Dec  9 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec  9 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  9 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Dec  9 11:33 /etc/kubernetes/scheduler.conf
	
	I1209 03:34:27.258316    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf
	I1209 03:34:27.261579    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.261617    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:34:27.264900    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf
	I1209 03:34:27.268049    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.268085    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:34:27.270739    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf
	I1209 03:34:27.273504    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.273532    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:34:27.276948    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf
	I1209 03:34:27.279625    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:27.279652    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 03:34:27.282209    9658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:34:27.285621    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:27.309121    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:27.736933    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:27.974120    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:28.005576    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:28.032764    9658 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:34:28.032851    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:28.534917    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:29.034900    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:29.039632    9658 api_server.go:72] duration metric: took 1.006890083s to wait for apiserver process to appear ...
	I1209 03:34:29.039641    9658 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:34:29.039657    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
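From this point both processes (9647 and 9658) sit in the same loop: GET https://10.0.2.15:8443/healthz with a short per-request client timeout, retried on a roughly five-second cadence, and every attempt below times out. A sketch of such a poll (minikube verifies the cluster CA; this sketch skips TLS verification purely to stay self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Short per-request timeout, matching the "Client.Timeout exceeded"
	// errors in the log.
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("stopped:", err) // same shape as the api_server.go:269 lines
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for healthz")
}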
	I1209 03:34:28.577510    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:28.577530    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:34.041622    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:34.041644    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:33.578459    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:33.578497    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:39.042110    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:39.042130    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:38.579795    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:38.579836    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:44.042419    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:44.042462    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:43.581410    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:43.581429    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:49.042923    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:49.042989    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:48.583440    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:48.583498    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:54.043691    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:54.043796    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:53.585719    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:53.585768    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:59.045403    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:59.045440    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:58.586483    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:58.586579    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:04.046805    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:04.046906    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:03.589050    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:03.589105    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:09.049310    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:09.049402    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:08.591519    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:08.591568    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:14.051792    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:14.051814    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:13.593858    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:13.595009    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:13.610807    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:13.610909    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:13.630786    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:13.630872    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:13.641086    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:13.641178    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:13.651192    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:13.651278    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:13.661414    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:13.661496    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:13.671940    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:13.672025    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:13.682354    9647 logs.go:282] 0 containers: []
	W1209 03:35:13.682366    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:13.682437    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:13.692996    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:13.693014    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:13.693021    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:13.704904    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:13.704917    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:13.716569    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:13.716582    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:13.729349    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:13.729361    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:13.733441    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:13.733449    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:13.748481    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:13.748492    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:13.763283    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:13.763293    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:13.800027    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:13.800037    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:13.906630    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:13.906643    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:13.918488    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:13.918499    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:13.936860    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:13.936872    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:13.963088    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:13.963101    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:13.978149    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:13.978162    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:13.995209    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:13.995223    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:14.006465    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:14.006486    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:14.030727    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:14.030738    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:14.044568    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:14.044585    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
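Each gathering pass above follows one pattern per component: enumerate containers with a `docker ps -a` name filter such as k8s_etcd, then tail the last 400 lines of each match. A sketch of that loop, with component names taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists container IDs the way logs.go does: a name filter
// such as k8s_etcd matched against `docker ps -a`.
func containersFor(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, component := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		for _, id := range containersFor(component) {
			fmt.Printf("gathering logs for %s [%s]\n", component, id)
			// Same tail depth as the log: the last 400 lines per container.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("  %d bytes of logs\n", len(out))
		}
	}
}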
	I1209 03:35:16.557762    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:19.052117    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:19.052206    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:21.558017    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:21.558271    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:21.580397    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:21.580506    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:21.594834    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:21.594918    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:21.606915    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:21.606995    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:21.617634    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:21.617720    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:21.634040    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:21.634121    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:21.644394    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:21.644474    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:21.654338    9647 logs.go:282] 0 containers: []
	W1209 03:35:21.654348    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:21.654416    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:21.664775    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:21.664792    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:21.664797    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:21.689610    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:21.689624    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:21.704455    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:21.704465    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:21.719737    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:21.719747    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:21.731851    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:21.731865    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:21.746240    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:21.746250    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:21.760164    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:21.760178    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:21.785278    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:21.785289    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:21.809834    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:21.809844    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:21.822020    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:21.822031    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:21.858602    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:21.858616    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:21.874034    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:21.874044    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:21.885439    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:21.885452    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:21.898115    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:21.898125    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:21.936175    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:21.936184    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:21.940679    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:21.940686    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:21.952649    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:21.952660    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
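	(The block above is one complete log-gathering pass: for each control-plane component the runner lists matching container IDs, then tails each container's last 400 log lines. A minimal Go sketch of that pattern follows — an illustration assumed from the logged commands, not minikube's actual logs.go; the helper name containerIDs is hypothetical.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of all containers, running or exited, whose name
    // carries the k8s_<component> prefix — mirroring the logged command:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            if len(ids) == 0 {
                // corresponds to the W-level line: No container was found matching "kindnet"
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // mirrors the logged command: docker logs --tail 400 <id>
                exec.Command("docker", "logs", "--tail", "400", id).Run()
            }
        }
    }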
	I1209 03:35:24.054755    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:24.054852    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:24.466385    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:29.056561    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:29.056946    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:29.088581    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:29.088729    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:29.107892    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:29.108001    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:29.122174    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:29.122244    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:29.134154    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:29.134236    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:29.144825    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:29.144901    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:29.156100    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:29.156185    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:29.166729    9658 logs.go:282] 0 containers: []
	W1209 03:35:29.166741    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:29.166812    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:29.185572    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:29.185587    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:29.185593    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:29.197304    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:29.197317    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:29.211014    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:29.211025    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:29.250961    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:29.250971    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:29.292966    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:29.292978    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:29.308524    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:29.308536    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:29.325338    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:29.325349    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:29.337436    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:29.337444    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:29.341753    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:29.341762    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:29.441603    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:29.441616    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:29.463447    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:29.463458    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:29.482623    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:29.482633    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:29.499240    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:29.499253    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:29.512119    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:29.512132    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:29.532338    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:29.532350    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:29.545374    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:29.545386    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:29.560960    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:29.560975    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:29.573997    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:29.574010    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:29.589188    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:29.589199    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:29.468777    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:29.468906    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:29.480383    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:29.480464    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:29.492155    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:29.492246    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:29.503309    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:29.503396    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:29.514312    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:29.514398    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:29.525161    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:29.525244    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:29.536183    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:29.536323    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:29.547640    9647 logs.go:282] 0 containers: []
	W1209 03:35:29.547651    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:29.547719    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:29.562802    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:29.562817    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:29.562822    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:29.577676    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:29.577693    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:29.605030    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:29.605041    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:29.645822    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:29.645833    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:29.659748    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:29.659758    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:29.673694    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:29.673707    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:29.691261    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:29.691270    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:29.704039    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:29.704049    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:29.740163    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:29.740175    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:29.753645    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:29.753658    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:29.768589    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:29.768603    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:29.782455    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:29.782467    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:29.797010    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:29.797025    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:29.811845    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:29.811855    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:29.827846    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:29.827859    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:29.832108    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:29.832113    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:29.856645    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:29.856654    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
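	(Each "Checking apiserver healthz ... / stopped ..." pair above is one probe of the apiserver's health endpoint that times out after roughly five seconds, which is what triggers the next gathering pass. A minimal Go sketch of that probe, assumed from the logged URL and timing rather than taken from minikube's api_server.go:)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // The 5s Timeout matches the ~5s gap between each "Checking" and
        // "stopped" pair in the log above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the test VM's apiserver cert is self-signed, so verification
                // is skipped in this sketch
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        fmt.Println("Checking apiserver healthz at", url, "...")
        resp, err := client.Get(url)
        if err != nil {
            // against a hung apiserver this prints the same
            // "Client.Timeout exceeded while awaiting headers" error seen above
            fmt.Printf("stopped: %s: %v\n", url, err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz status:", resp.Status)
    }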
	I1209 03:35:32.370283    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:32.118371    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:37.372438    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:37.372534    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:37.384218    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:37.384302    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:37.396303    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:37.396387    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:37.409172    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:37.409251    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:37.426461    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:37.426541    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:37.437524    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:37.437616    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:37.448951    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:37.449035    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:37.460730    9647 logs.go:282] 0 containers: []
	W1209 03:35:37.460745    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:37.460822    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:37.472579    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:37.472596    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:37.472603    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:37.485888    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:37.485900    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:37.504313    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:37.504325    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:37.516685    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:37.516695    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:37.532409    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:37.532421    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:37.560017    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:37.560032    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:37.576775    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:37.576789    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:37.590907    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:37.590921    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:37.603338    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:37.603349    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:37.630868    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:37.630880    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:37.646761    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:37.646771    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:37.658217    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:37.658231    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:37.697492    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:37.697501    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:37.718016    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:37.718027    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:37.731772    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:37.731783    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:37.749826    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:37.749837    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:37.753935    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:37.753943    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:37.120562    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:37.120841    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:37.142542    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:37.142669    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:37.157534    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:37.157616    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:37.169865    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:37.169966    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:37.182459    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:37.182537    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:37.193137    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:37.193216    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:37.203735    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:37.203808    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:37.214331    9658 logs.go:282] 0 containers: []
	W1209 03:35:37.214342    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:37.214408    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:37.226787    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:37.226804    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:37.226810    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:37.238170    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:37.238182    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:37.257613    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:37.257622    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:37.269663    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:37.269676    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:37.281855    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:37.281866    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:37.299546    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:37.299556    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:37.342344    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:37.342356    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:37.354097    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:37.354109    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:37.381104    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:37.381121    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:37.420494    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:37.420512    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:37.425805    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:37.425815    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:37.438437    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:37.438448    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:37.450176    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:37.450186    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:37.494239    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:37.494268    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:37.516302    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:37.516314    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:37.531826    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:37.531841    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:37.550639    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:37.550652    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:37.562843    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:37.562853    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:37.577271    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:37.577279    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
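	(Besides per-container tails, every pass also collects host-side logs: kubelet and Docker units via journalctl, kernel warnings via dmesg, and a container listing that falls back from crictl to docker. A small Go sketch running those same logged commands through bash -c, so the pipe and the `which crictl || echo crictl` substitution behave as recorded — illustrative only:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u docker -u cri-docker -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for _, c := range cmds {
            // each logged line runs as: /bin/bash -c "<command>"
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d bytes gathered\n", c, len(out))
        }
    }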
	I1209 03:35:40.094599    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:40.291165    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:45.096993    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:45.097304    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:45.124809    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:45.124950    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:45.142567    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:45.142667    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:45.155624    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:45.155713    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:45.167149    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:45.167236    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:45.178109    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:45.178185    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:45.189570    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:45.189655    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:45.199919    9658 logs.go:282] 0 containers: []
	W1209 03:35:45.199930    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:45.199999    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:45.210351    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:45.210368    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:45.210373    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:45.251818    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:45.251830    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:45.289245    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:45.289258    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:45.304166    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:45.304183    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:45.341214    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:45.341226    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:45.356004    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:45.356019    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:45.368355    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:45.368370    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:45.392887    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:45.392897    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:45.405389    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:45.405401    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:45.423845    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:45.423864    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:45.436202    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:45.436216    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:45.462569    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:45.462582    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:45.476740    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:45.476759    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:45.495883    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:45.495899    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:45.508739    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:45.508755    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:45.523480    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:45.523490    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:45.549382    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:45.549397    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:45.562533    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:45.562545    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:45.568633    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:45.568642    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:45.294776    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:45.294879    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:45.306257    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:45.306343    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:45.317239    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:45.317328    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:45.328866    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:45.328948    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:45.339969    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:45.340051    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:45.351445    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:45.351531    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:45.363103    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:45.363189    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:45.375053    9647 logs.go:282] 0 containers: []
	W1209 03:35:45.375066    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:45.375147    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:45.390579    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:45.390599    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:45.390608    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:45.429562    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:45.429575    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:45.445669    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:45.445680    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:45.472216    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:45.472242    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:45.487072    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:45.487087    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:45.502750    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:45.502762    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:45.521081    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:45.521096    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:45.533218    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:45.533231    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:45.550635    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:45.550645    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:45.591916    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:45.591928    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:45.596578    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:45.596589    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:45.629489    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:45.629501    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:45.641189    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:45.641201    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:45.656114    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:45.656123    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:45.675720    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:45.675732    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:45.689705    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:45.689716    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:45.702174    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:45.702186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:48.082949    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:48.219171    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:53.085174    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:53.085421    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:53.108231    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:35:53.108361    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:53.125219    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:35:53.125320    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:53.138621    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:35:53.138714    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:53.153746    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:35:53.153832    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:53.164573    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:35:53.164654    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:53.175051    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:35:53.175142    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:53.188741    9658 logs.go:282] 0 containers: []
	W1209 03:35:53.188753    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:53.188824    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:53.199438    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:35:53.199461    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:53.199466    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:53.239470    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:53.239490    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:53.281446    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:35:53.281462    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:35:53.293656    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:35:53.293669    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:35:53.306205    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:35:53.306218    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:35:53.318457    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:35:53.318469    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:35:53.355051    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:35:53.355072    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:35:53.371090    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:35:53.371102    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:35:53.383223    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:35:53.383237    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:35:53.403224    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:53.403237    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:53.431736    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:35:53.431749    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:35:53.444416    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:53.444427    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:53.448807    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:35:53.448816    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:35:53.463347    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:35:53.463363    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:35:53.478853    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:35:53.478867    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:35:53.490995    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:35:53.491007    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:35:53.504978    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:35:53.504990    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:35:53.525804    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:35:53.525815    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:35:53.548375    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:35:53.548395    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:53.219669    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:53.219760    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:53.230653    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:53.230749    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:53.241003    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:53.241088    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:53.252593    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:53.252677    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:53.263792    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:53.263883    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:53.275135    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:53.275216    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:53.286909    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:53.286988    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:53.298164    9647 logs.go:282] 0 containers: []
	W1209 03:35:53.298175    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:53.298245    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:53.309846    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:53.309863    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:53.309869    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:53.314254    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:53.314266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:53.328760    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:53.328774    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:53.344256    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:53.344271    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:53.356409    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:53.356418    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:53.396009    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:53.396033    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:53.408591    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:53.408604    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:53.424826    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:53.424836    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:53.451026    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:53.451036    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:53.469838    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:53.469851    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:53.482021    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:53.482035    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:53.500842    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:53.500857    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:53.513339    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:53.513352    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:53.525916    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:53.525926    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:53.564509    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:53.564525    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:53.590118    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:53.590130    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:53.601169    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:53.601181    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:56.130062    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:56.062892    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:01.130447    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:01.130562    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:01.141862    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:01.141955    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:01.153233    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:01.153320    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:01.165460    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:01.165542    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:01.176664    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:01.176747    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:01.193254    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:01.193334    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:01.204605    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:01.204687    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:01.217444    9647 logs.go:282] 0 containers: []
	W1209 03:36:01.217460    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:01.217532    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:01.229057    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:01.229078    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:01.229085    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:01.254930    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:01.254943    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:01.268094    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:01.268107    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:01.282606    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:01.282621    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:01.319621    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:01.319637    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:01.334331    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:01.334345    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:01.347028    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:01.347040    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:01.366242    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:01.366254    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:01.382511    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:01.382523    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:01.408271    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:01.408291    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:01.449604    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:01.449623    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:01.468579    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:01.468591    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:01.473179    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:01.473186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:01.487274    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:01.487289    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:01.502530    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:01.502543    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:01.515038    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:01.515050    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:01.530449    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:01.530463    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:01.065508    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:01.065813    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:01.091134    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:01.091276    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:01.108425    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:01.108532    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:01.125100    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:01.125193    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:01.136392    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:01.136473    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:01.148283    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:01.148367    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:01.160180    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:01.160270    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:01.171618    9658 logs.go:282] 0 containers: []
	W1209 03:36:01.171633    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:01.171710    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:01.182803    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:01.182822    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:01.182829    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:01.187888    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:01.187898    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:01.203085    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:01.203102    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:01.223369    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:01.223383    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:01.239521    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:01.239534    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:01.258291    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:01.258302    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:01.271163    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:01.271174    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:01.299173    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:01.299189    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:01.339687    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:01.339698    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:01.355218    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:01.355235    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:01.380178    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:01.380190    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:01.396883    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:01.396893    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:01.414459    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:01.414470    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:01.428378    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:01.428389    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:01.440071    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:01.440082    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:01.451806    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:01.451818    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:01.496495    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:01.496517    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:01.539746    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:01.539764    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:01.556844    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:01.556858    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:04.071736    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:04.045150    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:09.073925    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
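[Annotation] The repeated api_server.go:253/269 pairs above and below show minikube probing https://10.0.2.15:8443/healthz, giving up on each probe with a client timeout (the timestamps suggest roughly five seconds per probe), and then falling back to log collection before trying again. A minimal Go sketch of such a poll-with-deadline loop, assuming only the URL and timeout behavior read off the log; waitForHealthz, the back-off interval, and the overall deadline are illustrative, not minikube's actual code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall
// deadline passes. Each probe has its own short timeout, which is what
// produces the "Client.Timeout exceeded while awaiting headers" errors
// seen in the log when the apiserver never answers.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe deadline (assumed from the log timestamps)
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert inside the VM.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		} else {
			fmt.Printf("stopped: %s: %v\n", url, err)
		}
		time.Sleep(3 * time.Second) // back off before the next probe (assumed)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the report above the loop never succeeds for either PID, which is why the same probe/gather cycle repeats for the remainder of the section.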
	I1209 03:36:09.074105    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:09.092985    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:09.093092    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:09.111293    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:09.111384    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:09.123385    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:09.123474    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:09.135259    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:09.135333    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:09.157518    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:09.157566    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:09.169066    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:09.169141    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:09.180345    9658 logs.go:282] 0 containers: []
	W1209 03:36:09.180358    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:09.180433    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:09.192649    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:09.192665    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:09.192670    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:09.197455    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:09.197466    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:09.213005    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:09.213022    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:09.249209    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:09.249224    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:09.261579    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:09.261589    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:09.282247    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:09.282265    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:09.296881    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:09.296893    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:09.310492    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:09.310501    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:09.354564    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:09.354577    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:09.399373    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:09.399386    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:09.414932    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:09.414945    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:09.427237    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:09.427254    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:09.441645    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:09.441657    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:09.458991    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:09.459002    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:09.479181    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:09.479199    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:09.498045    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:09.498055    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:09.522950    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:09.522962    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:09.537381    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:09.537392    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:09.548500    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:09.548514    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:09.047458    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:09.047744    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:09.075754    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:09.075850    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:09.094994    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:09.095067    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:09.109014    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:09.109104    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:09.120937    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:09.121024    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:09.132371    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:09.132448    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:09.146984    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:09.147064    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:09.157028    9647 logs.go:282] 0 containers: []
	W1209 03:36:09.157040    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:09.157109    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:09.168853    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:09.168876    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:09.168884    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:09.173499    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:09.173511    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:09.188991    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:09.189005    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:09.204922    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:09.204943    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:09.220660    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:09.220675    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:09.232871    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:09.232885    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:09.273841    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:09.273859    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:09.286171    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:09.286186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:09.310130    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:09.310142    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:09.345714    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:09.345729    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:09.364505    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:09.364520    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:09.376738    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:09.376751    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:09.389068    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:09.389081    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:09.408158    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:09.408172    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:09.434208    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:09.434220    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:09.447053    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:09.447064    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:09.473478    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:09.473490    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
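[Annotation] Each collection pass above begins by discovering container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, logging "N containers: [...]" (logs.go:282) or a warning when nothing matches, as happens for kindnet on every pass. A rough sketch of that discovery step, assuming only the docker CLI flags visible in the log; containerIDs and the hard-coded component list are illustrative glue:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches the kubelet naming convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Component names taken from the filters in the log above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:282 lines
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c) // mirrors logs.go:284
		}
	}
}

Two IDs per component (e.g. [de33420ab15f 266a6560f67c] for kube-apiserver) indicate a restarted container: one exited instance plus its replacement, both of which get their logs collected below.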
	I1209 03:36:11.989496    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:12.061871    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:16.991684    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:16.991952    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:17.017971    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:17.018068    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:17.033170    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:17.033256    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:17.044055    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:17.044137    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:17.054847    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:17.054937    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:17.065303    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:17.065359    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:17.081046    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:17.081126    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:17.092110    9647 logs.go:282] 0 containers: []
	W1209 03:36:17.092125    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:17.092198    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:17.103859    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:17.103881    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:17.103887    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:17.108580    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:17.108595    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:17.134651    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:17.134665    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:17.147857    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:17.147871    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:17.162286    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:17.162302    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:17.174322    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:17.174335    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:17.187223    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:17.187234    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:17.206988    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:17.207003    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:17.219937    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:17.219948    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:17.236001    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:17.236011    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:17.248382    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:17.248390    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:17.287832    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:17.287854    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:17.326761    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:17.326773    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:17.354381    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:17.354398    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:17.372913    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:17.372927    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:17.388637    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:17.388650    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:17.403603    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:17.403614    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:17.063992    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:17.064101    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:17.075324    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:17.075406    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:17.086805    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:17.086888    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:17.098465    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:17.098546    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:17.109879    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:17.109953    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:17.121153    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:17.121237    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:17.131976    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:17.132065    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:17.143267    9658 logs.go:282] 0 containers: []
	W1209 03:36:17.143279    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:17.143354    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:17.155390    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:17.155407    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:17.155414    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:17.177682    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:17.177697    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:17.216318    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:17.216332    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:17.228541    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:17.228554    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:17.247209    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:17.247228    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:17.259995    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:17.260007    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:17.264601    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:17.264609    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:17.277547    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:17.277558    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:17.292659    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:17.292676    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:17.304985    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:17.304997    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:17.322921    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:17.322941    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:17.335893    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:17.335905    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:17.362344    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:17.362356    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:17.375612    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:17.375625    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:17.416524    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:17.416538    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:17.430892    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:17.430903    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:17.442763    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:17.442773    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:17.454250    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:17.454265    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:17.493569    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:17.493577    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:20.009121    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:19.920600    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:25.009382    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:25.009477    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:25.021222    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:25.021310    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:25.032968    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:25.033057    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:25.043977    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:25.044067    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:25.055310    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:25.055397    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:25.067669    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:25.067757    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:25.078899    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:25.078986    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:25.090080    9658 logs.go:282] 0 containers: []
	W1209 03:36:25.090094    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:25.090168    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:25.105964    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:25.106001    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:25.106006    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:25.118799    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:25.118807    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:25.145916    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:25.145928    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:25.161634    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:25.161646    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:25.180873    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:25.180886    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:25.207913    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:25.207925    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:25.219628    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:25.219640    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:25.261300    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:25.261309    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:25.266440    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:25.266451    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:25.303821    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:25.303836    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:25.339572    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:25.339583    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:25.357576    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:25.357591    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:25.368489    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:25.368502    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:25.379837    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:25.379848    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:25.393792    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:25.393807    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:25.411167    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:25.411178    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:25.422630    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:25.422644    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:25.436197    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:25.436211    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:25.447292    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:25.447303    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
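[Annotation] After discovery, each pass shells out to a fixed set of diagnostic commands: journalctl for kubelet and Docker, a filtered dmesg, kubectl describe nodes, docker logs --tail 400 for every discovered container, and finally a crictl-or-docker fallback for container status. A simplified sketch of that gathering step, with the command strings copied from the log lines above and the surrounding glue (gather) assumed:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash and dumps whatever it
// produces; failures are reported but do not stop the pass, so partial
// diagnostics still make it into the report.
func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	// Prefer crictl when installed, fall back to plain docker otherwise;
	// this fallback string is taken verbatim from the log.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}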
	I1209 03:36:24.922922    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:24.923156    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:24.941853    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:24.941979    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:24.956257    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:24.956349    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:24.968530    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:24.968601    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:24.978852    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:24.978941    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:24.990120    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:24.990203    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:25.000424    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:25.000501    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:25.011281    9647 logs.go:282] 0 containers: []
	W1209 03:36:25.011292    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:25.011360    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:25.023391    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:25.023410    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:25.023415    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:25.061855    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:25.061869    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:25.077082    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:25.077098    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:25.102985    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:25.103006    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:25.117754    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:25.117768    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:25.133309    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:25.133321    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:25.148978    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:25.148989    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:25.161850    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:25.161862    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:25.174777    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:25.174792    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:25.187561    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:25.187573    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:25.230604    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:25.230618    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:25.242855    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:25.242866    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:25.261251    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:25.261266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:25.276994    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:25.277009    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:25.281385    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:25.281395    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:25.297421    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:25.297433    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:25.314432    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:25.314444    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:27.963074    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:27.840883    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:32.965379    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:32.965490    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:32.979530    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:32.979619    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:32.991266    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:32.991350    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:33.003707    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:33.003802    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:33.015469    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:33.015554    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:33.027376    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:33.027459    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:33.038607    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:33.038692    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:33.049890    9658 logs.go:282] 0 containers: []
	W1209 03:36:33.049902    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:33.049980    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:33.061875    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:33.061892    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:33.061897    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:33.076524    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:33.076537    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:33.089377    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:33.089388    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:33.104145    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:33.104158    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:33.119542    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:33.119551    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:33.131510    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:33.131522    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:33.149749    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:33.149759    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:33.162839    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:33.162851    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:33.204419    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:33.204436    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:33.209238    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:33.209248    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:33.221512    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:33.221527    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:33.234687    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:33.234700    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:33.270161    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:33.270170    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:33.282672    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:33.282683    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:33.302880    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:33.302891    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:33.321492    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:33.321507    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:33.338680    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:33.338693    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:33.363185    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:33.363198    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:33.403148    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:33.403162    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:32.843093    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:32.843224    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:32.855170    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:32.855269    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:32.866170    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:32.866267    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:32.884795    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:32.884883    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:32.896012    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:32.896100    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:32.907162    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:32.907246    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:32.918338    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:32.918416    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:32.932128    9647 logs.go:282] 0 containers: []
	W1209 03:36:32.932143    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:32.932214    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:32.942394    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:32.942413    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:32.942419    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:32.954242    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:32.954257    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:32.969393    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:32.969404    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:32.983152    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:32.983164    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:32.996139    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:32.996153    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:33.012091    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:33.012103    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:33.038917    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:33.038927    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:33.051102    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:33.051111    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:33.078763    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:33.078777    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:33.118610    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:33.118622    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:33.146304    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:33.146315    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:33.186081    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:33.186095    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:33.200629    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:33.200641    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:33.214811    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:33.214825    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:33.227000    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:33.227014    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:33.239895    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:33.239906    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:33.265706    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:33.265718    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:35.772118    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:35.920275    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:40.774898    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:40.775415    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:40.815606    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:40.815759    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:40.834961    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:40.835070    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:40.849125    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:40.849219    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:40.861711    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:40.861786    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:40.872422    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:40.872513    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:40.886764    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:40.886844    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:40.897740    9647 logs.go:282] 0 containers: []
	W1209 03:36:40.897757    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:40.897826    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:40.908641    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:40.908659    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:40.908665    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:40.920563    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:40.920577    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:40.946356    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:40.946375    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:40.973247    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:40.973266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:40.990378    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:40.990393    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:41.002306    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:41.002315    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:41.020867    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:41.020882    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:41.036224    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:41.036237    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:41.049013    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:41.049022    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:41.087022    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:41.087037    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:41.091852    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:41.091863    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:41.107012    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:41.107023    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:41.121540    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:41.121551    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:41.137901    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:41.137910    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:41.153441    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:41.153453    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:41.173483    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:41.173496    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:41.186057    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:41.186069    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:40.922476    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:40.922584    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:40.934328    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:40.934422    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:40.946465    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:40.946545    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:40.958403    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:40.958493    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:40.974068    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:40.974147    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:40.986268    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:40.986353    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:41.001338    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:41.001427    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:41.012401    9658 logs.go:282] 0 containers: []
	W1209 03:36:41.012416    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:41.012494    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:41.027639    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:41.027654    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:41.027662    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:41.048839    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:41.048851    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:41.061758    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:41.061771    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:41.088647    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:41.088658    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:41.108203    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:41.108212    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:41.123980    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:41.123989    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:41.136043    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:41.136055    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:41.150637    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:41.150649    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:41.163352    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:41.163364    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:41.175798    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:41.175811    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:41.220337    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:41.220358    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:41.225996    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:41.226004    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:41.263618    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:41.263635    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:41.275810    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:41.275820    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:41.297736    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:41.297747    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:41.333225    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:41.333241    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:41.347206    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:41.347220    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:41.358571    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:41.358582    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:41.371068    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:41.371081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:43.894463    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:43.729588    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:48.896732    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
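
The api_server.go:253/269 pairs throughout this section show the health-check loop behind these failures: minikube polls the apiserver's /healthz endpoint and gives up after roughly five seconds per attempt (checking at 03:36:43.894, stopped at 03:36:48.896). A minimal sketch of that probe as a shell loop, assuming the 5s client timeout inferred from those timestamps and an arbitrary pause between attempts; the endpoint is copied from the log, and -k skips TLS verification since the apiserver certificate is cluster-internal:

    # probe /healthz until it answers, mimicking the logged ~5s client timeout
    until curl -fsk --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "healthz timed out; gathering diagnostics ..."
      sleep 3   # pause between attempts is an assumption, not taken from the log
    done
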
	I1209 03:36:48.896861    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:48.909626    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:48.909723    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:48.921217    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:48.921311    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:48.933126    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:48.933208    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:48.949726    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:48.949810    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:48.961374    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:48.961456    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:48.973744    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:48.973830    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:48.985507    9658 logs.go:282] 0 containers: []
	W1209 03:36:48.985520    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:48.985594    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:48.997173    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
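
Most components report "2 containers" in these enumerations because kubelet's Docker integration (cri-dockerd) names containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name filter like k8s_etcd matches both the current instance and a previous, exited one left behind by a restart. A hedged variant of the logged command that makes this visible by extending the format string beyond the log's {{.ID}}:

    # list both matches with names and status to see which one is the leftover restart
    docker ps -a --filter=name=k8s_etcd --format '{{.ID}}\t{{.Names}}\t{{.Status}}'
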
	I1209 03:36:48.997190    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:48.997196    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:49.035770    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:49.035784    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:49.071842    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:49.071853    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:49.087318    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:49.087329    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:49.100441    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:49.100452    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:49.113393    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:49.113403    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:49.132762    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:49.132776    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:49.145588    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:49.145602    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:49.158244    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:49.158259    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:49.174486    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:49.174499    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:49.186259    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:49.186270    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:49.227967    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:49.227984    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:49.232913    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:49.232919    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:49.248091    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:49.248103    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:49.267331    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:49.267346    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:49.280428    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:49.280439    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:49.291469    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:49.291479    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:49.302892    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:49.302902    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:49.320229    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:49.320243    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
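
Each sweep above repeats the same shape: enumerate container IDs per component, then tail 400 lines from each. A by-hand reproduction of that collection, looping over the component names the log enumerates; this is a sketch for manual debugging, not minikube's actual implementation, which drives every command through ssh_runner:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter="name=k8s_${name}" --format '{{.ID}}'); do
        echo "=== ${name} [${id}] ==="
        docker logs --tail 400 "${id}"
      done
    done
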
	I1209 03:36:48.732037    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:48.732539    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:48.764110    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:48.764310    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:48.785993    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:48.786115    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:48.799402    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:48.799501    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:48.811672    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:48.811755    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:48.825547    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:48.825627    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:48.835722    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:48.835806    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:48.846195    9647 logs.go:282] 0 containers: []
	W1209 03:36:48.846206    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:48.846274    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:48.856510    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:48.856527    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:48.856533    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:48.870461    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:48.870475    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:48.884101    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:48.884111    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:48.903540    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:48.903551    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:48.916593    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:48.916611    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:48.940541    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:48.940559    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:48.952728    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:48.952740    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:48.967082    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:48.967095    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:49.006591    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:49.006610    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:49.011625    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:49.011638    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:49.057410    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:49.057421    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:49.069532    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:49.069547    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:49.084603    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:49.084616    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:49.110432    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:49.110451    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:49.122693    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:49.122707    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:49.135633    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:49.135644    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:49.152426    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:49.152439    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
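
Besides per-container logs, every sweep pulls the same host-level sources. These commands are copied verbatim from the Run: lines above (only a line continuation added) and can be run as-is inside the minikube guest:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
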
	I1209 03:36:51.673644    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:51.846861    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:56.676057    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:56.676519    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:56.719410    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:56.719568    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:56.737197    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:56.737311    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:56.751540    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:56.751641    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:56.763793    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:56.763872    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:56.774533    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:56.774615    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:56.790051    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:56.790130    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:56.800174    9647 logs.go:282] 0 containers: []
	W1209 03:36:56.800187    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:56.800262    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:56.810509    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:56.810530    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:56.810536    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:56.824717    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:56.824730    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:56.856570    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:56.856582    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:56.875529    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:56.875539    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:56.890554    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:56.890566    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:56.907934    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:56.907947    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:56.920284    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:56.920296    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:56.933358    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:56.933369    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:56.948947    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:56.948965    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:56.960532    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:56.960543    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:56.973146    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:56.973158    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:56.997355    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:56.997372    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:57.039565    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:57.039589    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:57.044441    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:57.044451    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:57.082477    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:57.082490    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:57.098315    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:57.098328    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:57.117129    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:57.117141    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:56.847722    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:56.847819    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:56.858972    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:36:56.859055    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:56.870247    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:36:56.870332    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:56.881557    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:36:56.881646    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:56.893127    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:36:56.893210    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:56.904707    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:36:56.904803    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:56.916407    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:36:56.916493    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:56.929020    9658 logs.go:282] 0 containers: []
	W1209 03:36:56.929034    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:56.929107    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:56.940502    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:36:56.940518    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:36:56.940524    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:36:56.953674    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:36:56.953685    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:36:56.973879    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:36:56.973888    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:36:56.992206    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:36:56.992220    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:36:57.004715    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:36:57.004727    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:57.019152    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:57.019167    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:57.060824    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:36:57.060838    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:36:57.075910    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:36:57.075924    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:36:57.088769    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:36:57.088782    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:36:57.109703    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:36:57.109719    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:36:57.122305    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:36:57.122316    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:36:57.134173    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:57.134185    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:57.159173    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:57.159182    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:57.200113    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:57.200125    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:57.205144    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:36:57.205153    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:36:57.218857    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:36:57.218868    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:36:57.230266    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:36:57.230279    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:36:57.269927    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:36:57.269939    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:36:57.284653    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:36:57.284663    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:36:59.798893    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:59.635884    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:04.800423    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:04.800545    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:04.812501    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:04.812587    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:04.827687    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:04.827773    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:04.839433    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:04.839512    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:04.851440    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:04.851520    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:04.862429    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:04.862509    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:04.873273    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:04.873353    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:04.884350    9658 logs.go:282] 0 containers: []
	W1209 03:37:04.884360    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:04.884432    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:04.896325    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:04.896342    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:04.896348    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:04.914478    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:04.914492    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:04.926662    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:04.926674    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:04.947955    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:04.947968    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:04.967393    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:04.967407    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:04.980958    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:04.980971    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:04.993173    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:04.993187    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:05.018331    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:05.018344    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:05.040972    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:05.040983    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:05.056743    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:05.056752    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:05.069386    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:05.069400    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:05.085753    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:05.085767    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:05.097447    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:05.097459    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:05.138530    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:05.138543    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:05.150528    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:05.150540    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:05.164633    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:05.164645    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:05.197742    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:05.197754    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:05.209292    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:05.209305    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:05.213660    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:05.213669    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:04.638424    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:04.638906    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:04.672263    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:04.672417    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:04.693656    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:04.693755    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:04.706291    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:04.706380    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:04.718626    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:04.718709    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:04.729721    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:04.729803    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:04.740930    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:04.741015    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:04.750937    9647 logs.go:282] 0 containers: []
	W1209 03:37:04.750948    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:04.751019    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:04.761760    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:04.761778    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:04.761784    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:04.774159    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:04.774173    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:04.815522    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:04.815535    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:04.831718    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:04.831731    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:04.858310    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:04.858329    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:04.882210    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:04.882223    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:04.887327    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:04.887338    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:04.905764    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:04.905776    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:04.920789    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:04.920805    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:04.939527    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:04.939540    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:04.951722    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:04.951733    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:04.964744    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:04.964756    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:04.977064    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:04.977075    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:04.997061    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:04.997071    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:05.014296    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:05.014306    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:05.055861    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:05.055874    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:05.071534    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:05.071545    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:07.586008    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:07.755466    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:12.588301    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:12.588565    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:12.613211    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:12.613339    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:12.636980    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:12.637077    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:12.652573    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:12.652654    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:12.663732    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:12.663820    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:12.674537    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:12.674616    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:12.684799    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:12.684874    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:12.695858    9647 logs.go:282] 0 containers: []
	W1209 03:37:12.695870    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:12.695929    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:12.706573    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:12.706591    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:12.706601    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:12.718393    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:12.718406    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:12.729991    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:12.730002    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:12.742357    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:12.742369    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:12.755975    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:12.755988    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:12.774499    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:12.774514    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:12.797997    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:12.798009    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:12.756143    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:12.756244    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:12.767877    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:12.767966    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:12.779392    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:12.779483    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:12.791243    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:12.791334    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:12.803867    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:12.803951    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:12.818073    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:12.818155    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:12.829246    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:12.829329    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:12.870100    9658 logs.go:282] 0 containers: []
	W1209 03:37:12.870114    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:12.870188    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:12.883384    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:12.883402    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:12.883408    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:12.903765    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:12.903777    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:12.922401    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:12.922409    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:12.935083    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:12.935096    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:12.947231    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:12.947241    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:12.971678    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:12.971690    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:12.984624    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:12.984637    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:12.989432    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:12.989440    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:13.027358    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:13.027369    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:13.041720    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:13.041730    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:13.075501    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:13.075516    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:13.091467    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:13.091490    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:13.103481    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:13.103492    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:13.143043    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:13.143055    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:13.156219    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:13.156232    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:13.167599    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:13.167608    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:13.179487    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:13.179498    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:13.191284    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:13.191294    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:13.208769    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:13.208784    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:12.839615    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:12.839638    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:12.876936    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:12.876949    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:12.889244    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:12.889257    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:12.905173    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:12.905181    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:12.921011    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:12.921023    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:12.945385    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:12.945402    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:12.973053    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:12.973062    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:12.989336    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:12.989348    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:13.001440    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:13.001453    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:13.006088    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:13.006096    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:15.523358    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:15.722393    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:20.525565    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:20.525909    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:20.557338    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:20.557477    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:20.574203    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:20.574361    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:20.586854    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:20.586945    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:20.601522    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:20.601602    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:20.611802    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:20.611887    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:20.622057    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:20.622136    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:20.640643    9647 logs.go:282] 0 containers: []
	W1209 03:37:20.640653    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:20.640717    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:20.650858    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:20.650875    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:20.650881    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:20.662733    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:20.662743    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:20.680714    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:20.680724    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:20.684906    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:20.684913    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:20.718900    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:20.718910    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:20.734703    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:20.734715    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:20.750923    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:20.750934    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:20.769993    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:20.770006    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:20.815652    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:20.815670    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:20.828501    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:20.828515    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:20.840791    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:20.840803    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:20.856779    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:20.856789    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:20.882757    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:20.882768    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:20.897657    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:20.897669    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:20.921916    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:20.921929    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:20.936380    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:20.936393    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:20.952174    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:20.952186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:20.724737    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:20.724826    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:20.736411    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:20.736500    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:20.747787    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:20.747871    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:20.760309    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:20.760396    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:20.773227    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:20.773305    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:20.784784    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:20.784867    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:20.795973    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:20.796061    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:20.806832    9658 logs.go:282] 0 containers: []
	W1209 03:37:20.806844    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:20.806921    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:20.817824    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:20.817840    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:20.817846    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:20.822751    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:20.822764    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:20.838875    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:20.838889    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:20.855312    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:20.855323    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:20.867971    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:20.867982    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:20.880846    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:20.880858    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:20.904508    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:20.904518    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:20.939691    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:20.939703    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:20.957387    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:20.957398    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:20.999669    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:20.999679    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:21.014310    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:21.014319    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:21.025806    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:21.025817    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:21.044619    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:21.044633    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:21.059496    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:21.059507    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:21.075091    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:21.075102    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:21.110661    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:21.110672    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:21.124792    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:21.124805    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:21.136346    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:21.136357    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:21.155012    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:21.155021    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:23.669300    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:23.465567    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:28.671619    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
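
The five-second gap between each "Checking apiserver healthz" line and its matching "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" line is consistent with a 5-second per-request client timeout: Go's net/http emits exactly that error string when http.Client.Timeout expires before response headers arrive. A minimal, self-contained sketch of the probe pattern (hypothetical names and a guessed timeout value; not minikube's actual api_server.go code):

    // healthzProbe issues one GET against an apiserver /healthz endpoint with a
    // 5-second client timeout, matching the Checking -> stopped spacing above.
    // Hypothetical sketch under stated assumptions, not minikube's implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func healthzProbe(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed from the ~5 s gaps in this log
            Transport: &http.Transport{
                // The test cluster serves a self-signed certificate on 10.0.2.15:8443.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // When the timeout fires before headers arrive, err.Error() contains
            // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
        return nil
    }

    func main() {
        if err := healthzProbe("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }

Under that assumption, every probe in the remainder of this transcript fails the same way: the apiserver on 10.0.2.15:8443 never answers within the timeout, and each failure triggers the diagnostic-gathering pass that follows.
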
	I1209 03:37:28.671731    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:28.684887    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:28.684974    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:28.696185    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:28.696267    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:28.707639    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:28.707721    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:28.718629    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:28.718715    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:28.732554    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:28.732638    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:28.744077    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:28.744162    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:28.755556    9658 logs.go:282] 0 containers: []
	W1209 03:37:28.755568    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:28.755639    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:28.767332    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:28.767347    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:28.767352    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:28.779590    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:28.779603    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:28.792771    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:28.792786    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:28.821991    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:28.822003    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:28.840724    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:28.840738    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:28.859483    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:28.859502    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:28.871741    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:28.871755    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:28.906822    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:28.906832    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:28.939272    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:28.939284    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:28.950415    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:28.950425    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:28.961904    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:28.961913    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:28.977781    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:28.977795    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:28.995041    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:28.995050    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:29.036680    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:29.036694    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:29.040955    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:29.040964    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:29.054790    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:29.054803    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:29.069428    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:29.069440    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:29.084448    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:29.084461    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:29.096206    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:29.096218    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
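
After each failed probe, both test processes (PIDs 9647 and 9658) enumerate container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each container's logs. The repeated "2 containers" results suggest a current container plus an earlier exited one for most components. A minimal sketch of that gathering loop (hypothetical; not minikube's logs.go implementation):

    // gatherComponentLogs mirrors the enumeration pattern in the log above: one
    // `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per component,
    // then `docker logs --tail 400 <id>` for every ID returned.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    func main() {
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("enumerating", c, "failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            // Matches the report's "N containers: [...]" lines; an empty result
            // corresponds to the `No container was found matching "kindnet"` warning.
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s\n", c, id, logs)
            }
        }
    }

The kubelet, Docker, dmesg, "describe nodes", and "container status" passes in the log follow the same shape, shelling out to journalctl, dmesg, kubectl, and crictl respectively.
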
	I1209 03:37:28.467814    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:28.468002    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:28.481510    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:28.481603    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:28.500548    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:28.500624    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:28.510926    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:28.511002    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:28.522468    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:28.522548    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:28.533878    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:28.533962    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:28.545041    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:28.545115    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:28.554942    9647 logs.go:282] 0 containers: []
	W1209 03:37:28.554958    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:28.555025    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:28.566729    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:28.566749    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:28.566755    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:28.571373    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:28.571380    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:28.593481    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:28.593492    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:28.608711    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:28.608721    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:28.647642    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:28.647649    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:28.661253    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:28.661263    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:28.676920    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:28.676932    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:28.689419    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:28.689431    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:28.713582    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:28.713601    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:28.753028    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:28.753044    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:28.780221    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:28.780231    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:28.796219    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:28.796231    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:28.808627    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:28.808640    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:28.822322    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:28.822332    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:28.835371    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:28.835383    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:28.848685    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:28.848699    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:28.863745    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:28.863761    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:31.376811    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:31.620383    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:36.379165    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:36.379659    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:36.428560    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:36.428670    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:36.464642    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:36.464743    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:36.481565    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:36.481643    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:36.496952    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:36.497039    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:36.507517    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:36.507600    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:36.518598    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:36.518678    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:36.530256    9647 logs.go:282] 0 containers: []
	W1209 03:37:36.530273    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:36.530346    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:36.541123    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:36.541144    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:36.541150    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:36.565206    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:36.565216    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:36.580641    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:36.580656    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:36.598828    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:36.598842    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:36.611401    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:36.611413    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:36.623407    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:36.623415    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:36.637109    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:36.637120    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:36.650355    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:36.650366    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:36.678197    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:36.678212    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:36.693983    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:36.693992    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:36.707784    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:36.707794    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:36.720936    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:36.720952    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:36.736362    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:36.736374    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:36.775989    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:36.776002    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:36.781617    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:36.781633    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:36.798399    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:36.798408    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:36.836270    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:36.836285    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:36.622848    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:36.622944    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:36.634749    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:36.634837    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:36.646576    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:36.646661    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:36.659012    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:36.659102    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:36.670804    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:36.670885    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:36.681667    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:36.681749    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:36.693581    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:36.693658    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:36.704935    9658 logs.go:282] 0 containers: []
	W1209 03:37:36.704946    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:36.705020    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:36.716356    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:36.716373    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:36.716380    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:36.755070    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:36.755082    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:36.771244    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:36.771256    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:36.783943    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:36.783955    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:36.797101    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:36.797113    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:36.817712    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:36.817726    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:36.836979    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:36.836988    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:36.849655    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:36.849667    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:36.891702    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:36.891713    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:36.896164    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:36.896173    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:36.909926    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:36.909937    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:36.921465    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:36.921474    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:36.935330    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:36.935343    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:36.946987    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:36.946998    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:36.958411    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:36.958425    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:36.980545    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:36.980551    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:37.013539    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:37.013551    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:37.031131    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:37.031141    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:37.043198    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:37.043209    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:39.563739    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:39.352288    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:44.564901    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:44.565007    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:44.581858    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:44.581938    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:44.599980    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:44.600063    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:44.611800    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:44.611884    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:44.623067    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:44.623150    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:44.634204    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:44.634288    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:44.645578    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:44.645664    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:44.657486    9658 logs.go:282] 0 containers: []
	W1209 03:37:44.657498    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:44.657570    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:44.671117    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:44.671135    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:44.671141    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:44.706562    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:44.706572    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:44.721026    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:44.721038    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:44.744928    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:44.744940    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:44.757945    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:44.757956    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:44.762528    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:44.762539    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:44.777931    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:44.777946    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:44.798384    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:44.798397    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:44.815738    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:44.815749    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:44.827253    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:44.827267    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:44.868519    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:44.868550    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:44.904712    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:44.904723    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:44.919711    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:44.919723    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:44.930634    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:44.930651    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:44.942494    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:44.942504    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:44.953677    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:44.953688    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:44.967049    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:44.967062    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:44.978879    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:44.978893    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:44.990363    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:44.990373    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:44.354999    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:44.355469    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:44.386472    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:44.386623    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:44.405885    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:44.405978    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:44.420396    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:44.420488    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:44.431920    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:44.432000    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:44.442292    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:44.442370    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:44.452781    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:44.452865    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:44.471794    9647 logs.go:282] 0 containers: []
	W1209 03:37:44.471805    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:44.471874    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:44.483358    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:44.483376    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:44.483381    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:44.487511    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:44.487519    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:44.502019    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:44.502032    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:44.518216    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:44.518230    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:44.530356    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:44.530367    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:44.542086    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:44.542098    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:44.564698    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:44.564707    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:44.583886    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:44.583896    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:44.600003    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:44.600012    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:44.620299    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:44.620311    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:44.633156    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:44.633168    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:44.648856    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:44.648871    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:44.662363    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:44.662376    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:44.704056    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:44.704069    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:44.742294    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:44.742306    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:44.755176    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:44.755189    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:44.786808    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:44.786828    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:47.307272    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:47.509353    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:52.309530    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:52.309826    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:52.335174    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:52.335318    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:52.352404    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:52.352508    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:52.366920    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:52.367016    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:52.381981    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:52.382064    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:52.393211    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:52.393291    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:52.404294    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:52.404373    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:52.414353    9647 logs.go:282] 0 containers: []
	W1209 03:37:52.414364    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:52.414431    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:52.424821    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:52.424841    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:52.424849    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:52.439711    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:52.439721    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:52.451185    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:52.451195    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:52.462584    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:52.462596    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:52.466982    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:52.466988    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:52.478449    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:52.478461    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:52.490766    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:52.490781    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:52.516452    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:52.516465    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:52.538898    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:52.538909    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:52.554434    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:52.554445    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:52.570919    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:52.570932    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:52.586681    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:52.586692    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:52.611155    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:52.611171    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:52.648534    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:52.648551    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:52.663802    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:52.663814    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:52.685994    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:52.686006    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:52.726432    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:52.726454    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:52.509701    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:52.509811    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:52.523938    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:37:52.524022    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:52.535764    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:37:52.535855    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:52.548301    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:37:52.548383    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:52.559996    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:37:52.560079    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:52.572068    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:37:52.572146    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:52.583925    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:37:52.584005    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:52.595576    9658 logs.go:282] 0 containers: []
	W1209 03:37:52.595588    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:52.595662    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:52.606544    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:37:52.606559    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:37:52.606565    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:37:52.621757    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:37:52.621770    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:37:52.634567    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:37:52.634581    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:37:52.660551    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:37:52.660564    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:37:52.672632    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:52.672644    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:52.716309    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:37:52.716324    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:37:52.731668    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:37:52.731682    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:37:52.743966    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:37:52.743979    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:37:52.760866    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:37:52.760877    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:37:52.780944    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:37:52.780958    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:37:52.800233    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:37:52.800248    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:37:52.819369    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:37:52.819383    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:37:52.830776    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:37:52.830787    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:37:52.865115    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:52.865129    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:52.888169    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:52.888181    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:52.924068    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:37:52.924081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:37:52.942636    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:37:52.942649    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:37:52.960635    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:37:52.960648    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:52.973576    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:52.973588    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:55.478756    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:55.242220    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:00.481224    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:00.481332    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:00.496021    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:00.496101    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:00.507054    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:00.507133    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:00.517922    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:00.518006    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:00.529288    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:00.529362    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:00.540954    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:00.541044    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:00.553700    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:00.553790    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:00.565268    9658 logs.go:282] 0 containers: []
	W1209 03:38:00.565281    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:00.565352    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:00.579139    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:00.579226    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:00.579269    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:00.620138    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:00.620162    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:00.633537    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:00.633550    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:00.646394    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:00.646406    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:00.689182    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:00.689193    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:00.704654    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:00.704666    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:00.245024    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:00.245626    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:00.301221    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:38:00.301345    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:00.317512    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:38:00.317608    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:00.337109    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:38:00.337194    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:00.347721    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:38:00.347809    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:00.358570    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:38:00.358645    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:00.369293    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:38:00.369373    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:00.380064    9647 logs.go:282] 0 containers: []
	W1209 03:38:00.380076    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:00.380152    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:00.391234    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:38:00.391254    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:38:00.391260    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:38:00.405306    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:38:00.405318    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:38:00.430563    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:38:00.430575    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:38:00.442080    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:38:00.442091    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:38:00.454079    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:38:00.454091    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:38:00.471508    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:38:00.471519    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:38:00.482873    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:00.482883    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:00.524277    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:00.524293    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:00.528973    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:38:00.528985    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:38:00.544036    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:38:00.544049    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:38:00.559722    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:00.559734    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:00.584518    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:38:00.584538    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:00.611572    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:00.611585    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:00.661580    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:38:00.661591    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:38:00.675677    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:38:00.675689    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:38:00.687651    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:38:00.687663    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:38:00.707766    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:38:00.707778    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:38:00.723433    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:00.723445    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:00.740747    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:00.740760    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:00.752538    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:00.752549    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:00.764583    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:00.764598    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:00.779016    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:00.779026    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:00.793678    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:00.793691    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:00.808371    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:00.808381    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:00.827693    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:00.827708    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:00.839138    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:00.839149    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:00.843559    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:00.843565    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:00.884621    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:00.884636    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:00.896010    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:00.896021    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:00.907484    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:00.907499    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:03.432244    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:03.224766    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:08.434496    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
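
	The paired "Checking apiserver healthz ..." / "stopped: ..." lines above show both test processes (pids 9647 and 9658) polling their apiserver's /healthz endpoint and giving up after roughly a five-second client timeout before falling back to another round of log gathering. A minimal Go sketch of such a probe, assuming the endpoint from the log and skipping TLS verification because the VM's apiserver certificate is not trusted by the host; this is illustrative, not minikube's actual api_server.go code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Roughly the gap between "Checking ..." and "stopped: ..." above.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip verification of the self-signed apiserver cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers 200 "ok"
	}
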
	I1209 03:38:08.434686    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:08.447032    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:08.447100    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:08.462858    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:08.462931    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:08.474121    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:08.474190    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:08.490338    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:08.490411    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:08.501773    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:08.501851    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:08.513423    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:08.513499    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:08.524359    9658 logs.go:282] 0 containers: []
	W1209 03:38:08.524371    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:08.524433    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:08.535833    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
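
	Each gathering cycle begins with the enumeration above: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` run per control-plane component. Most components list two container IDs (the exited pre-restart instance plus the current one), while "kindnet" matches none on this bridge-CNI cluster. A small Go sketch of the same enumeration, illustrative rather than the logs.go implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the `docker ps -a --filter=... --format={{.ID}}` runs above;
		// the output is one container ID per line.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go:282 lines
	}
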
	I1209 03:38:08.535846    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:08.535850    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:08.577875    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:08.577889    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:08.583016    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:08.583028    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:08.598508    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:08.598521    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:08.610914    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:08.610928    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:08.626065    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:08.626081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:08.646732    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:08.646749    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:08.659258    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:08.659269    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:08.671294    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:08.671307    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:08.705373    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:08.705385    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:08.716430    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:08.716444    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:08.735539    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:08.735549    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:08.752899    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:08.752914    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:08.764324    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:08.764337    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:08.776991    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:08.777007    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:08.811526    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:08.811566    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:08.822897    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:08.822908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:08.834537    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:08.834549    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:08.852123    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:08.852134    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:08.226988    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:08.227268    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:08.249689    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:38:08.249835    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:08.265302    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:38:08.265409    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:08.281037    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:38:08.281122    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:08.291501    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:38:08.291576    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:08.307243    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:38:08.307314    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:08.318353    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:38:08.318425    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:08.330670    9647 logs.go:282] 0 containers: []
	W1209 03:38:08.330682    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:08.330749    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:08.341640    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:38:08.341658    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:08.341664    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:08.346396    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:38:08.346405    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:38:08.360612    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:38:08.360623    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:38:08.371765    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:38:08.371775    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:38:08.383243    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:38:08.383253    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:38:08.395054    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:38:08.395067    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:38:08.406856    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:08.406867    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:08.429503    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:38:08.429517    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:08.443618    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:38:08.443628    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:38:08.458186    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:38:08.458201    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:38:08.476755    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:38:08.476769    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:38:08.492209    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:08.492223    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:08.533754    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:08.533770    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:08.572603    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:38:08.572615    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:38:08.599384    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:38:08.599399    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:38:08.615292    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:38:08.615309    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:38:08.631593    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:38:08.631606    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:38:11.149993    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:11.373893    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:16.152540    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:16.152616    9647 kubeadm.go:597] duration metric: took 4m4.284523041s to restartPrimaryControlPlane
	W1209 03:38:16.152659    9647 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 03:38:16.152687    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1209 03:38:17.227231    9647 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0745515s)
	I1209 03:38:17.227321    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:38:17.232121    9647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:38:17.234925    9647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:38:17.237993    9647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:38:17.237999    9647 kubeadm.go:157] found existing configuration files:
	
	I1209 03:38:17.238027    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf
	I1209 03:38:17.240960    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:38:17.240994    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:38:17.243636    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf
	I1209 03:38:17.246017    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:38:17.246052    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:38:17.249139    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf
	I1209 03:38:17.251732    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:38:17.251761    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:38:17.254250    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf
	I1209 03:38:17.257203    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:38:17.257233    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
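
	The four grep-then-rm exchanges above are minikube's stale-kubeconfig check: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted so the upcoming kubeadm init can regenerate it. Here all four files were already gone after the reset, so every grep exits with status 2. A rough Go equivalent of that check, with the endpoint and file names taken from the log; the loop itself is an illustration, not kubeadm.go:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:60521"
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + name
			data, err := os.ReadFile(path)
			// Unreadable, missing, or pointing at the wrong endpoint:
			// remove it so `kubeadm init` writes a fresh one.
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(path) // ignore the error; the file may simply not exist
			}
		}
	}
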
	I1209 03:38:17.259927    9647 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 03:38:17.277621    9647 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 03:38:17.277659    9647 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 03:38:17.324983    9647 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 03:38:17.325045    9647 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 03:38:17.325102    9647 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 03:38:17.379615    9647 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 03:38:17.384823    9647 out.go:235]   - Generating certificates and keys ...
	I1209 03:38:17.384862    9647 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 03:38:17.384898    9647 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 03:38:17.384951    9647 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 03:38:17.384987    9647 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 03:38:17.385036    9647 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 03:38:17.385071    9647 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 03:38:17.385127    9647 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 03:38:17.385162    9647 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 03:38:17.385206    9647 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 03:38:17.385278    9647 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 03:38:17.385303    9647 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 03:38:17.385338    9647 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 03:38:17.565063    9647 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 03:38:17.660313    9647 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 03:38:17.719712    9647 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 03:38:18.081137    9647 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 03:38:18.110125    9647 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 03:38:18.110524    9647 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 03:38:18.110556    9647 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 03:38:18.198241    9647 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 03:38:16.376444    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:16.376562    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:16.389428    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:16.389511    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:16.401395    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:16.401483    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:16.414577    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:16.414665    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:16.426661    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:16.426741    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:16.438502    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:16.438585    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:16.449885    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:16.449974    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:16.461132    9658 logs.go:282] 0 containers: []
	W1209 03:38:16.461147    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:16.461223    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:16.472759    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:16.472777    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:16.472784    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:16.511127    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:16.511139    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:16.526440    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:16.526454    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:16.539444    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:16.539457    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:16.563190    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:16.563207    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:16.578639    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:16.578653    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:16.591121    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:16.591135    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:16.605303    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:16.605318    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:16.618489    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:16.618500    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:16.632639    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:16.632652    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:16.674646    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:16.674666    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:16.698418    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:16.698433    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:16.718022    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:16.718041    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:16.736820    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:16.736837    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:16.750054    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:16.750070    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:16.762681    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:16.762695    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:16.767646    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:16.767658    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:16.804069    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:16.804091    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:16.816006    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:16.816020    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:19.336501    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:18.201465    9647 out.go:235]   - Booting up control plane ...
	I1209 03:38:18.201511    9647 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 03:38:18.201555    9647 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 03:38:18.201605    9647 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 03:38:18.201642    9647 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 03:38:18.201738    9647 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 03:38:22.703716    9647 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502250 seconds
	I1209 03:38:22.703786    9647 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 03:38:22.707214    9647 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 03:38:23.221832    9647 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 03:38:23.222163    9647 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-416000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 03:38:23.726610    9647 kubeadm.go:310] [bootstrap-token] Using token: ilakkd.dsphbr8h9ubfikit
	I1209 03:38:23.732757    9647 out.go:235]   - Configuring RBAC rules ...
	I1209 03:38:23.732812    9647 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 03:38:23.732855    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 03:38:23.738173    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 03:38:23.738907    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 03:38:23.739595    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 03:38:23.740329    9647 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 03:38:23.743188    9647 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 03:38:23.938940    9647 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 03:38:24.129800    9647 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 03:38:24.130317    9647 kubeadm.go:310] 
	I1209 03:38:24.130346    9647 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 03:38:24.130388    9647 kubeadm.go:310] 
	I1209 03:38:24.130429    9647 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 03:38:24.130435    9647 kubeadm.go:310] 
	I1209 03:38:24.130482    9647 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 03:38:24.130527    9647 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 03:38:24.130556    9647 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 03:38:24.130583    9647 kubeadm.go:310] 
	I1209 03:38:24.130627    9647 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 03:38:24.130629    9647 kubeadm.go:310] 
	I1209 03:38:24.130670    9647 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 03:38:24.130673    9647 kubeadm.go:310] 
	I1209 03:38:24.130721    9647 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 03:38:24.130763    9647 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 03:38:24.130833    9647 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 03:38:24.130836    9647 kubeadm.go:310] 
	I1209 03:38:24.130882    9647 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 03:38:24.130937    9647 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 03:38:24.130942    9647 kubeadm.go:310] 
	I1209 03:38:24.130984    9647 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ilakkd.dsphbr8h9ubfikit \
	I1209 03:38:24.131033    9647 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 \
	I1209 03:38:24.131042    9647 kubeadm.go:310] 	--control-plane 
	I1209 03:38:24.131044    9647 kubeadm.go:310] 
	I1209 03:38:24.131114    9647 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 03:38:24.131119    9647 kubeadm.go:310] 
	I1209 03:38:24.131161    9647 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ilakkd.dsphbr8h9ubfikit \
	I1209 03:38:24.131265    9647 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 
	I1209 03:38:24.131321    9647 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
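
	The --discovery-token-ca-cert-hash printed in both join commands above is not arbitrary: kubeadm derives it as the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which is why the control-plane and worker join variants show the same value. A short Go sketch of that derivation, using the certificate directory the init run reported:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
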
	I1209 03:38:24.131331    9647 cni.go:84] Creating CNI manager for ""
	I1209 03:38:24.131344    9647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:38:24.135791    9647 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:38:24.142784    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:38:24.145945    9647 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
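
	The 496-byte 1-k8s.conflist pushed here is the bridge CNI configuration announced two lines up. The log does not show the payload itself; a representative bridge conflist of the kind minikube writes (the plugin options and pod subnet below are assumptions, not the exact bytes) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}

	The companion portmap plugin is what lets hostPort mappings keep working on top of the plain bridge.
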
	I1209 03:38:24.150933    9647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:38:24.151008    9647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 03:38:24.151226    9647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-416000 minikube.k8s.io/updated_at=2024_12_09T03_38_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=stopped-upgrade-416000 minikube.k8s.io/primary=true
	I1209 03:38:24.182745    9647 kubeadm.go:1113] duration metric: took 31.797625ms to wait for elevateKubeSystemPrivileges
	I1209 03:38:24.182785    9647 ops.go:34] apiserver oom_adj: -16
	I1209 03:38:24.189961    9647 kubeadm.go:394] duration metric: took 4m12.335566458s to StartCluster
	I1209 03:38:24.189980    9647 settings.go:142] acquiring lock: {Name:mk9d239bb773df077cf7eb12290ff1e68f296c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:24.190158    9647 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:38:24.190545    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:24.191071    9647 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:38:24.191070    9647 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:38:24.191120    9647 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-416000"
	I1209 03:38:24.191129    9647 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-416000"
	W1209 03:38:24.191132    9647 addons.go:243] addon storage-provisioner should already be in state true
	I1209 03:38:24.191142    9647 host.go:66] Checking if "stopped-upgrade-416000" exists ...
	I1209 03:38:24.191155    9647 config.go:182] Loaded profile config "stopped-upgrade-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:38:24.191160    9647 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-416000"
	I1209 03:38:24.191324    9647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-416000"
	I1209 03:38:24.192280    9647 retry.go:31] will retry after 1.258603263s: connect: dial unix /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/monitor: connect: connection refused
	I1209 03:38:24.192964    9647 kapi.go:59] client config for stopped-upgrade-416000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102bcb740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
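
	The &rest.Config dump above is the client-go configuration minikube builds for the profile: everything is zero-valued except the host and the mutual-TLS file paths (client cert and key plus the cluster CA). A pared-down sketch of building an equivalent client with client-go; the paths are shortened here and the WrapTransport function pointer from the log is omitted:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://10.0.2.15:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/path/to/profiles/stopped-upgrade-416000/client.crt",
				KeyFile:  "/path/to/profiles/stopped-upgrade-416000/client.key",
				CAFile:   "/path/to/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The same kind of call the node checks in this log depend on:
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(len(nodes.Items), "node(s)")
	}
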
	I1209 03:38:24.193267    9647 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-416000"
	W1209 03:38:24.193271    9647 addons.go:243] addon default-storageclass should already be in state true
	I1209 03:38:24.193278    9647 host.go:66] Checking if "stopped-upgrade-416000" exists ...
	I1209 03:38:24.193781    9647 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:24.193785    9647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 03:38:24.193790    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:38:24.195791    9647 out.go:177] * Verifying Kubernetes components...
	I1209 03:38:24.203782    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:38:24.292997    9647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:38:24.298205    9647 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:38:24.298263    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:38:24.302451    9647 api_server.go:72] duration metric: took 111.368958ms to wait for apiserver process to appear ...
	I1209 03:38:24.302461    9647 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:38:24.302467    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:24.362822    9647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:24.686399    9647 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 03:38:24.686409    9647 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 03:38:25.458307    9647 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:38:24.338603    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:24.338731    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:24.350428    9658 logs.go:282] 2 containers: [de33420ab15f 266a6560f67c]
	I1209 03:38:24.350521    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:24.362357    9658 logs.go:282] 2 containers: [a6f017ce0c0c 1c740c03f549]
	I1209 03:38:24.362421    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:24.374168    9658 logs.go:282] 2 containers: [5e78def2868f f8298f4cf6b7]
	I1209 03:38:24.374251    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:24.386174    9658 logs.go:282] 2 containers: [e05712180d03 a42b643cfd15]
	I1209 03:38:24.386257    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:24.397553    9658 logs.go:282] 2 containers: [8f7d60907e5f 8c650fdc680b]
	I1209 03:38:24.397634    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:24.408959    9658 logs.go:282] 2 containers: [75bb11931733 67a9fa94ff40]
	I1209 03:38:24.409048    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:24.420379    9658 logs.go:282] 0 containers: []
	W1209 03:38:24.420390    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:24.420461    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:24.431521    9658 logs.go:282] 2 containers: [06dbb9ef7790 d3cb70f32269]
	I1209 03:38:24.431538    9658 logs.go:123] Gathering logs for kube-apiserver [266a6560f67c] ...
	I1209 03:38:24.431544    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 266a6560f67c"
	I1209 03:38:24.467681    9658 logs.go:123] Gathering logs for etcd [a6f017ce0c0c] ...
	I1209 03:38:24.467700    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6f017ce0c0c"
	I1209 03:38:24.483794    9658 logs.go:123] Gathering logs for etcd [1c740c03f549] ...
	I1209 03:38:24.483813    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c740c03f549"
	I1209 03:38:24.498537    9658 logs.go:123] Gathering logs for kube-scheduler [a42b643cfd15] ...
	I1209 03:38:24.498558    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a42b643cfd15"
	I1209 03:38:24.518635    9658 logs.go:123] Gathering logs for storage-provisioner [d3cb70f32269] ...
	I1209 03:38:24.518649    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3cb70f32269"
	I1209 03:38:24.537352    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:24.537365    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:24.578202    9658 logs.go:123] Gathering logs for coredns [5e78def2868f] ...
	I1209 03:38:24.578223    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e78def2868f"
	I1209 03:38:24.590598    9658 logs.go:123] Gathering logs for kube-proxy [8c650fdc680b] ...
	I1209 03:38:24.590610    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c650fdc680b"
	I1209 03:38:24.606950    9658 logs.go:123] Gathering logs for kube-controller-manager [75bb11931733] ...
	I1209 03:38:24.606960    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bb11931733"
	I1209 03:38:24.625229    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:38:24.625242    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:24.638677    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:24.638688    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:24.643556    9658 logs.go:123] Gathering logs for kube-apiserver [de33420ab15f] ...
	I1209 03:38:24.643567    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de33420ab15f"
	I1209 03:38:24.662751    9658 logs.go:123] Gathering logs for coredns [f8298f4cf6b7] ...
	I1209 03:38:24.662768    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8298f4cf6b7"
	I1209 03:38:24.675856    9658 logs.go:123] Gathering logs for kube-proxy [8f7d60907e5f] ...
	I1209 03:38:24.675870    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d60907e5f"
	I1209 03:38:24.689199    9658 logs.go:123] Gathering logs for kube-controller-manager [67a9fa94ff40] ...
	I1209 03:38:24.689208    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67a9fa94ff40"
	I1209 03:38:24.705886    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:24.705896    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:24.745151    9658 logs.go:123] Gathering logs for storage-provisioner [06dbb9ef7790] ...
	I1209 03:38:24.745165    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06dbb9ef7790"
	I1209 03:38:24.756751    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:24.756765    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:24.778260    9658 logs.go:123] Gathering logs for kube-scheduler [e05712180d03] ...
	I1209 03:38:24.778274    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e05712180d03"
	I1209 03:38:25.462224    9647 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:25.462239    9647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 03:38:25.462254    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:38:25.510574    9647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:27.296357    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:29.304487    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:29.304537    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:32.297853    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:32.297947    9658 kubeadm.go:597] duration metric: took 4m5.165122208s to restartPrimaryControlPlane
	W1209 03:38:32.298006    9658 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 03:38:32.298033    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1209 03:38:33.330595    9658 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.032562584s)
	I1209 03:38:33.330680    9658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:38:33.335882    9658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:38:33.338874    9658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:38:33.341916    9658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:38:33.341922    9658 kubeadm.go:157] found existing configuration files:
	
	I1209 03:38:33.341950    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf
	I1209 03:38:33.344478    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:38:33.344512    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:38:33.347121    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf
	I1209 03:38:33.350238    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:38:33.350267    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:38:33.353556    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf
	I1209 03:38:33.355949    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:38:33.355978    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:38:33.358844    9658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf
	I1209 03:38:33.361993    9658 kubeadm.go:163] "https://control-plane.minikube.internal:60625" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60625 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:38:33.362023    9658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 03:38:33.364812    9658 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 03:38:33.382754    9658 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 03:38:33.382833    9658 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 03:38:33.431291    9658 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 03:38:33.431356    9658 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 03:38:33.431404    9658 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 03:38:33.480282    9658 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 03:38:33.484484    9658 out.go:235]   - Generating certificates and keys ...
	I1209 03:38:33.484522    9658 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 03:38:33.484555    9658 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 03:38:33.484599    9658 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 03:38:33.484633    9658 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 03:38:33.484670    9658 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 03:38:33.484713    9658 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 03:38:33.484756    9658 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 03:38:33.484792    9658 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 03:38:33.484839    9658 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 03:38:33.484877    9658 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 03:38:33.484901    9658 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 03:38:33.484933    9658 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 03:38:33.536458    9658 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 03:38:33.620259    9658 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 03:38:33.701450    9658 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 03:38:33.813437    9658 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 03:38:33.848371    9658 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 03:38:33.848701    9658 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 03:38:33.848730    9658 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 03:38:33.933906    9658 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 03:38:33.938635    9658 out.go:235]   - Booting up control plane ...
	I1209 03:38:33.938688    9658 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 03:38:33.938734    9658 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 03:38:33.938772    9658 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 03:38:33.938819    9658 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 03:38:33.938946    9658 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 03:38:34.305065    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:34.305086    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:38.436275    9658 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502561 seconds
	I1209 03:38:38.436354    9658 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 03:38:38.441325    9658 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 03:38:38.949458    9658 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 03:38:38.949593    9658 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-765000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 03:38:39.454961    9658 kubeadm.go:310] [bootstrap-token] Using token: jfi1wa.uirtef2mabjp664a
	I1209 03:38:39.461702    9658 out.go:235]   - Configuring RBAC rules ...
	I1209 03:38:39.461774    9658 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 03:38:39.461822    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 03:38:39.470774    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 03:38:39.471632    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 03:38:39.472878    9658 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 03:38:39.474917    9658 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 03:38:39.478614    9658 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 03:38:39.684827    9658 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 03:38:39.860285    9658 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 03:38:39.860824    9658 kubeadm.go:310] 
	I1209 03:38:39.860852    9658 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 03:38:39.860855    9658 kubeadm.go:310] 
	I1209 03:38:39.860892    9658 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 03:38:39.860896    9658 kubeadm.go:310] 
	I1209 03:38:39.860907    9658 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 03:38:39.860946    9658 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 03:38:39.860984    9658 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 03:38:39.860986    9658 kubeadm.go:310] 
	I1209 03:38:39.861012    9658 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 03:38:39.861017    9658 kubeadm.go:310] 
	I1209 03:38:39.861044    9658 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 03:38:39.861048    9658 kubeadm.go:310] 
	I1209 03:38:39.861072    9658 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 03:38:39.861110    9658 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 03:38:39.861149    9658 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 03:38:39.861155    9658 kubeadm.go:310] 
	I1209 03:38:39.861218    9658 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 03:38:39.861262    9658 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 03:38:39.861269    9658 kubeadm.go:310] 
	I1209 03:38:39.861326    9658 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jfi1wa.uirtef2mabjp664a \
	I1209 03:38:39.861377    9658 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 \
	I1209 03:38:39.861388    9658 kubeadm.go:310] 	--control-plane 
	I1209 03:38:39.861392    9658 kubeadm.go:310] 
	I1209 03:38:39.861436    9658 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 03:38:39.861440    9658 kubeadm.go:310] 
	I1209 03:38:39.861480    9658 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jfi1wa.uirtef2mabjp664a \
	I1209 03:38:39.861531    9658 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 
	I1209 03:38:39.861793    9658 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
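
The join commands printed above pin the cluster CA by public-key hash rather than shipping the certificate. As a point of reference (not part of this test run), kubeadm's --discovery-token-ca-cert-hash is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo; a minimal standard-library Go sketch, assuming kubeadm's default CA path, would be:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // /etc/kubernetes/pki/ca.crt is kubeadm's default CA location.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
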
	I1209 03:38:39.861831    9658 cni.go:84] Creating CNI manager for ""
	I1209 03:38:39.861843    9658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:38:39.867709    9658 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:38:39.877715    9658 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:38:39.880820    9658 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
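
The 496-byte /etc/cni/net.d/1-k8s.conflist copied in here is the bridge CNI chain the driver recommendation above refers to. The actual file contents are not shown in this log; a representative bridge conflist, written the same way, might look like the following sketch (subnet and plugin details are illustrative assumptions, not minikube's exact file):

    package main

    import "os"

    // Illustrative bridge CNI chain; minikube's real 1-k8s.conflist may differ.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
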
	I1209 03:38:39.886348    9658 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:38:39.886403    9658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 03:38:39.886422    9658 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-765000 minikube.k8s.io/updated_at=2024_12_09T03_38_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=running-upgrade-765000 minikube.k8s.io/primary=true
	I1209 03:38:39.931204    9658 ops.go:34] apiserver oom_adj: -16
	I1209 03:38:39.931203    9658 kubeadm.go:1113] duration metric: took 44.848042ms to wait for elevateKubeSystemPrivileges
	I1209 03:38:39.931298    9658 kubeadm.go:394] duration metric: took 4m12.812350083s to StartCluster
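
Two setup steps complete back to back here: the minikube-rbac binding grants cluster-admin to the kube-system default service account (elevateKubeSystemPrivileges), and ops.go reads the apiserver's OOM score, where -16 tells the kernel's OOM killer to strongly avoid that process. A sketch of the same probe in Go, wrapping the identical bash one-liner from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Resolve the kube-apiserver pid and read its legacy OOM adjustment.
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", out) // -16 = strongly deprioritized for OOM kill
    }
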
	I1209 03:38:39.931314    9658 settings.go:142] acquiring lock: {Name:mk9d239bb773df077cf7eb12290ff1e68f296c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:39.931389    9658 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:38:39.931817    9658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:39.932031    9658 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:38:39.932096    9658 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:38:39.932132    9658 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-765000"
	I1209 03:38:39.932141    9658 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-765000"
	I1209 03:38:39.932152    9658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-765000"
	I1209 03:38:39.932143    9658 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-765000"
	W1209 03:38:39.932164    9658 addons.go:243] addon storage-provisioner should already be in state true
	I1209 03:38:39.932175    9658 host.go:66] Checking if "running-upgrade-765000" exists ...
	I1209 03:38:39.932221    9658 config.go:182] Loaded profile config "running-upgrade-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:38:39.933172    9658 kapi.go:59] client config for running-upgrade-765000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/running-upgrade-765000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10431f740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:38:39.933297    9658 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-765000"
	W1209 03:38:39.933302    9658 addons.go:243] addon default-storageclass should already be in state true
	I1209 03:38:39.933309    9658 host.go:66] Checking if "running-upgrade-765000" exists ...
	I1209 03:38:39.936745    9658 out.go:177] * Verifying Kubernetes components...
	I1209 03:38:39.937085    9658 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:39.939814    9658 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 03:38:39.939821    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:38:39.942687    9658 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:38:39.946732    9658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:38:39.950738    9658 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:39.950744    9658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 03:38:39.950750    9658 sshutil.go:53] new ssh client: &{IP:localhost Port:60526 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/running-upgrade-765000/id_rsa Username:docker}
	I1209 03:38:40.043255    9658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:38:40.048742    9658 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:38:40.048800    9658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:38:40.052912    9658 api_server.go:72] duration metric: took 120.872167ms to wait for apiserver process to appear ...
	I1209 03:38:40.052920    9658 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:38:40.052927    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:40.112325    9658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:40.127019    9658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:40.448773    9658 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 03:38:40.448785    9658 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 03:38:39.305386    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:39.305408    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:45.054629    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:45.054694    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:44.305849    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:44.305901    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:50.054886    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:50.054921    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:49.306616    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:49.306651    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:54.307535    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:54.307588    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
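
From here on, two minikube processes interleave in this log: pid 9658 is the running-upgrade-765000 profile and pid 9647 is stopped-upgrade-416000 (its kubelet entries appear further down), which is why the timestamps occasionally step backwards. Each repeats the same wait loop from api_server.go: GET /healthz with a short per-request timeout, then retry roughly every five seconds until an overall deadline. A minimal sketch of that loop, with the timeouts assumed and TLS verification skipped for brevity (minikube itself trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Per-request budget; on expiry Go reports "context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)", as seen above.
            Timeout: 4 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // matches "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("gave up waiting for apiserver healthz")
    }
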
	W1209 03:38:54.688931    9647 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 03:38:54.692988    9647 out.go:177] * Enabled addons: storage-provisioner
	I1209 03:38:55.055042    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:55.055065    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:54.699939    9647 addons.go:510] duration metric: took 30.509574s for enable addons: enabled=[storage-provisioner]
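
Note that "Enabled addons: storage-provisioner" only means the manifests were applied over SSH; the default-storageclass addon additionally has to list StorageClasses through the API server, and with 8443 unreachable that call fails with the dial i/o timeout shown above. The failing call is equivalent to this client-go sketch (kubeconfig path taken from the log; the program is illustrative, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // This kubeconfig points the client at https://10.0.2.15:8443.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            // With the apiserver down this is the "dial tcp 10.0.2.15:8443: i/o timeout" above.
            fmt.Println("listing StorageClasses failed:", err)
            return
        }
        fmt.Println("storage classes:", len(scs.Items))
    }
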
	I1209 03:39:00.055322    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:00.055349    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:59.308675    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:59.308727    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:05.055609    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:05.055632    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:04.310275    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:04.310317    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:10.056091    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:10.056118    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 03:39:10.450589    9658 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 03:39:10.454868    9658 out.go:177] * Enabled addons: storage-provisioner
	I1209 03:39:10.461762    9658 addons.go:510] duration metric: took 30.530246667s for enable addons: enabled=[storage-provisioner]
	I1209 03:39:09.312244    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:09.312301    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:15.056719    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:15.056766    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:14.314458    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:14.314481    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:20.057625    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:20.057670    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:19.314814    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:19.314845    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:25.058669    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:25.058723    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:24.317010    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:24.317169    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:24.345135    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:39:24.345230    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:24.357615    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:39:24.357696    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:24.368392    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:39:24.368474    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:24.378607    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:39:24.378681    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:24.389130    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:39:24.389216    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:24.399433    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:39:24.399527    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:24.409668    9647 logs.go:282] 0 containers: []
	W1209 03:39:24.409678    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:24.409739    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:24.420337    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:39:24.420352    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:24.420357    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:24.425276    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:39:24.425283    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:39:24.439849    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:39:24.439863    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:39:24.457730    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:39:24.457741    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:39:24.469007    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:24.469017    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:39:24.505683    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:24.505783    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:24.507551    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:24.507557    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:24.549731    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:39:24.549743    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:39:24.564254    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:39:24.564267    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:39:24.579727    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:39:24.579741    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:39:24.592101    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:39:24.592111    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:39:24.607340    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:39:24.607350    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:39:24.619640    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:24.619653    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:24.643743    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:39:24.643761    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:24.655681    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:24.655704    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:39:24.655730    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:39:24.655736    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:24.655739    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:24.655742    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:24.655745    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
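
The flagged kubelet entries are the node authorizer at work: a kubelet may read a ConfigMap only once a pod that mounts it is bound to its node, so immediately after the restart there is "no relationship found" between stopped-upgrade-416000 and the kube-proxy ConfigMap, and the watch is refused until the kube-proxy pod is scheduled there. The "Found kubelet problem" lines themselves come from minikube scanning the journalctl output for failure signatures, roughly as in this sketch (the patterns here are illustrative, not minikube's exact list in logs.go):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // Stand-in for `journalctl -u kubelet -n 400` output.
        journal := `... kubelet[10474]: E1209 ... Failed to watch *v1.ConfigMap ...`
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, "Failed to watch") || strings.Contains(line, "is forbidden") {
                fmt.Println("Found kubelet problem:", line)
            }
        }
    }
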
	I1209 03:39:30.060073    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:30.060130    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:35.061956    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:35.062023    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:34.658081    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:40.064183    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:40.064327    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:40.079761    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:39:40.079843    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:40.097562    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:39:40.097644    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:40.111777    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:39:40.111861    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:40.133474    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:39:40.133566    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:40.149576    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:39:40.149662    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:40.161112    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:39:40.161193    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:40.171378    9658 logs.go:282] 0 containers: []
	W1209 03:39:40.171390    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:40.171460    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:40.181575    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:39:40.181592    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:40.181598    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:40.186337    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:39:40.186344    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:39:40.200626    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:39:40.200638    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:39:40.221425    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:39:40.221442    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:39:40.232782    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:39:40.232792    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:39:40.244757    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:40.244768    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:40.269673    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:40.269684    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:39:40.303478    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:40.303487    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:40.339592    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:39:40.339605    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:39:40.355086    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:39:40.355097    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:39:40.369841    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:39:40.369853    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:39:40.382022    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:39:40.382033    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:39:40.393909    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:39:40.393921    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
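
The container-status probe above is deliberately runtime-agnostic: "which crictl || echo crictl" prefers crictl when it is installed, and the trailing "|| sudo docker ps -a" falls back to the Docker CLI when the crictl invocation fails. Driven from Go it is just the same one-liner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl, fall back to docker; identical to the bash command above.
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        if err != nil {
            fmt.Println("container status probe failed:", err)
        }
        fmt.Print(string(out))
    }
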
	I1209 03:39:39.660406    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:39.660637    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:39.675388    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:39:39.675490    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:39.686852    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:39:39.686920    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:39.697271    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:39:39.697338    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:39.708371    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:39:39.708454    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:39.719137    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:39:39.719221    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:39.730750    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:39:39.730829    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:39.741850    9647 logs.go:282] 0 containers: []
	W1209 03:39:39.741863    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:39.741935    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:39.752382    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:39:39.752398    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:39:39.752404    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:39:39.771286    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:39:39.771302    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:39:39.783126    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:39:39.783135    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:39:39.797828    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:39:39.797838    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:39:39.812101    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:39.812113    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:39.835762    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:39.835772    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:39.840170    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:39:39.840179    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:39:39.855020    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:39:39.855032    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:39:39.869747    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:39:39.869757    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:39:39.886986    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:39:39.886997    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:39.898656    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:39.898666    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:39:39.934966    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:39.935061    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:39.936886    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:39.936892    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:39.973991    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:39:39.974004    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:39:39.989168    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:39.989178    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:39:39.989205    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:39:39.989209    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:39.989213    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:39.989216    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:39.989219    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:39:42.907509    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:47.909591    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:47.909849    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:47.935830    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:39:47.935941    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:47.950638    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:39:47.950745    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:47.967070    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:39:47.967156    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:47.979412    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:39:47.979505    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:47.989785    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:39:47.989871    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:48.000730    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:39:48.000812    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:48.011089    9658 logs.go:282] 0 containers: []
	W1209 03:39:48.011104    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:48.011173    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:48.022572    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:39:48.022589    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:39:48.022595    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:39:48.037429    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:39:48.037442    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:39:48.049994    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:39:48.050008    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:39:48.061537    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:39:48.061550    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:39:48.076895    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:39:48.076908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:39:48.088751    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:48.088761    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:48.093890    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:48.093900    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:48.133178    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:39:48.133189    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:39:48.148060    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:39:48.148070    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:39:48.165660    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:39:48.165670    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:39:48.185455    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:48.185466    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:48.209111    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:39:48.209122    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:48.221345    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:48.221354    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:39:49.993143    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:50.758151    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:54.993738    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:54.994034    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:55.016330    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:39:55.016461    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:55.032400    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:39:55.032497    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:55.045264    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:39:55.045343    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:55.056465    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:39:55.056546    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:55.067001    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:39:55.067080    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:55.077788    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:39:55.077866    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:55.087656    9647 logs.go:282] 0 containers: []
	W1209 03:39:55.087668    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:55.087730    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:55.097959    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:39:55.097976    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:39:55.097982    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:39:55.112448    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:39:55.112460    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:39:55.125986    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:39:55.125999    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:55.138573    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:55.138585    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:55.143162    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:39:55.143170    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:39:55.158059    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:39:55.158072    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:39:55.175618    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:39:55.175628    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:39:55.187140    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:39:55.187151    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:39:55.202845    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:55.202859    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:55.227239    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:55.227246    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:39:55.263762    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:55.263857    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:55.265679    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:55.265684    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:55.310492    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:39:55.310502    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:39:55.322794    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:39:55.322806    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:39:55.341226    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:55.341236    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:39:55.341266    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:39:55.341281    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:55.341285    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:55.341288    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:55.341291    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:39:55.759202    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:55.759389    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:55.778100    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:39:55.778217    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:55.791657    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:39:55.791757    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:55.806177    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:39:55.806259    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:55.816606    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:39:55.816679    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:55.826970    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:39:55.827046    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:55.837793    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:39:55.837869    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:55.847932    9658 logs.go:282] 0 containers: []
	W1209 03:39:55.847944    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:55.848014    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:55.858098    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:39:55.858114    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:39:55.858120    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:39:55.869293    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:55.869308    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:55.892459    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:39:55.892467    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:55.903930    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:55.903940    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:55.908310    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:39:55.908317    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:39:55.922791    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:39:55.922804    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:39:55.934490    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:39:55.934501    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:39:55.945702    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:39:55.945712    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:39:55.960146    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:39:55.960156    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:39:55.977892    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:39:55.977903    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:39:55.989080    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:55.989090    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:39:56.023822    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:56.023836    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:56.059934    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:39:56.059945    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:39:58.575471    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:03.577760    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:03.578073    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:03.595968    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:03.596072    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:03.610214    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:03.610306    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:03.621994    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:03.622068    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:03.632038    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:03.632126    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:03.642996    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:03.643072    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:03.653712    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:03.653787    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:03.664033    9658 logs.go:282] 0 containers: []
	W1209 03:40:03.664044    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:03.664109    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:03.674585    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:03.674603    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:03.674608    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:03.710046    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:03.710057    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:03.721915    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:03.721929    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:03.736138    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:03.736148    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:03.748930    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:03.748942    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:03.771685    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:03.771698    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:03.795023    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:03.795033    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:03.829274    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:03.829288    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:03.833749    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:03.833758    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:03.848635    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:03.848645    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:03.862895    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:03.862908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:03.876324    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:03.876335    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:03.888199    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:03.888212    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:05.345216    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:06.403210    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:10.347327    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:10.347562    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:10.367783    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:10.367884    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:10.382277    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:10.382376    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:10.394264    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:40:10.394346    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:10.405552    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:10.405638    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:10.416257    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:10.416335    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:10.433429    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:10.433510    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:10.443923    9647 logs.go:282] 0 containers: []
	W1209 03:40:10.443943    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:10.444022    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:10.454262    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:10.454279    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:10.454285    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:10.490525    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:10.490622    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:10.492348    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:10.492354    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:10.529279    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:10.529294    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:10.543367    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:10.543378    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:10.554995    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:10.555004    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:10.570036    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:10.570051    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:10.593911    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:10.593922    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:10.607117    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:10.607131    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:10.611658    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:10.611667    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:10.626011    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:10.626024    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:10.637579    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:10.637595    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:10.649506    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:10.649516    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:10.667409    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:10.667421    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:10.678901    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:10.678911    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:10.678937    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:10.678941    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:10.678944    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:10.678947    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:10.678950    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
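
	Note: the repeated "Found kubelet problem" / "Problems detected in kubelet" entries above point at the Kubernetes node authorizer. Its message "no relationship found between node 'stopped-upgrade-416000' and this object" means the kubelet may only read ConfigMaps referenced by pods bound to its node; with no kube-proxy pod bound yet after the upgrade, the list is denied. minikube flags such lines while scanning the journal (logs.go:138). A minimal sketch of that kind of pattern scan follows; the pattern list and findKubeletProblems are illustrative assumptions, not minikube's code.

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "regexp"
	        "strings"
	    )

	    // Illustrative problem patterns, loosely modeled on the two
	    // kubelet lines flagged in the log above.
	    var problemPatterns = []*regexp.Regexp{
	        regexp.MustCompile(`is forbidden: User "system:node:`),
	        regexp.MustCompile(`Failed to watch \*v1\.`),
	    }

	    // findKubeletProblems scans journal output line by line and
	    // collects any line matching a known problem pattern.
	    func findKubeletProblems(journal string) []string {
	        var problems []string
	        sc := bufio.NewScanner(strings.NewReader(journal))
	        for sc.Scan() {
	            line := sc.Text()
	            for _, re := range problemPatterns {
	                if re.MatchString(line) {
	                    problems = append(problems, line)
	                    break
	                }
	            }
	        }
	        return problems
	    }

	    func main() {
	        journal := `Dec 09 11:38:37 kubelet[10474]: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps"`
	        for _, p := range findKubeletProblems(journal) {
	            fmt.Println("Found kubelet problem:", p)
	        }
	    }
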
	I1209 03:40:11.405439    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:11.405682    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:11.432030    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:11.432147    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:11.449220    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:11.449312    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:11.462665    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:11.462750    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:11.473740    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:11.473824    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:11.484912    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:11.484997    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:11.495335    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:11.495413    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:11.505525    9658 logs.go:282] 0 containers: []
	W1209 03:40:11.505537    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:11.505607    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:11.515971    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:11.515986    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:11.515991    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:11.530396    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:11.530406    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:11.544353    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:11.544364    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:11.556109    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:11.556119    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:11.570652    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:11.570662    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:11.588461    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:11.588474    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:11.613538    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:11.613548    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:11.624921    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:11.624932    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:11.659170    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:11.659182    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:11.664316    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:11.664324    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:11.676245    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:11.676258    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:11.691096    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:11.691108    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:11.702247    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:11.702257    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:14.239111    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:19.241234    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:19.241413    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:19.259234    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:19.259317    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:19.271474    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:19.271554    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:19.282413    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:19.282495    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:19.293213    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:19.293297    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:19.303700    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:19.303802    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:19.314352    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:19.314424    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:19.324363    9658 logs.go:282] 0 containers: []
	W1209 03:40:19.324375    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:19.324437    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:19.335649    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:19.335667    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:19.335673    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:19.372224    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:19.372235    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:19.376877    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:19.376884    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:19.451600    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:19.451616    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:19.466355    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:19.466365    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:19.485021    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:19.485035    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:19.496658    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:19.496670    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:19.516900    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:19.516915    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:19.530706    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:19.530718    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:19.544845    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:19.544855    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:19.556378    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:19.556389    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:19.569763    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:19.569772    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:19.594637    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:19.594649    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
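
	Note: each "Checking apiserver healthz" line is followed roughly five seconds later by "stopped: ... Client.Timeout exceeded while awaiting headers", which is consistent with an HTTP client timeout of about 5s; the apiserver at https://10.0.2.15:8443 never answers from the host. A minimal sketch of that probe pattern, assuming the 5s timeout inferred from the timestamps and TLS verification disabled for the self-signed apiserver cert:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // checkHealthz issues a single GET to the apiserver healthz
	    // endpoint with a short client timeout, as in the log above.
	    func checkHealthz(url string) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            // On an unreachable apiserver this surfaces as
	            // "context deadline exceeded (Client.Timeout exceeded
	            // while awaiting headers)".
	            return err
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz status:", resp.Status)
	        return nil
	    }

	    func main() {
	        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
	            fmt.Println("stopped:", err)
	        }
	    }
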
	I1209 03:40:20.682951    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:22.108658    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:25.685253    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:25.685512    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:25.712516    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:25.712610    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:25.727705    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:25.727788    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:25.738512    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:40:25.738597    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:25.750297    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:25.750376    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:25.762825    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:25.762903    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:25.773633    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:25.773712    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:25.787898    9647 logs.go:282] 0 containers: []
	W1209 03:40:25.787910    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:25.787978    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:25.801406    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:25.801422    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:25.801427    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:25.846096    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:25.846109    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:25.857813    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:25.857826    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:25.877499    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:25.877512    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:25.895869    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:25.895879    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:25.907561    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:25.907573    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:25.944907    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:25.945009    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:25.946775    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:25.946782    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:25.951040    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:25.951047    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:25.965688    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:25.965699    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:25.988594    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:25.988602    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:25.999606    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:25.999617    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:26.038889    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:26.038899    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:26.053235    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:26.053245    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:26.065850    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:26.065862    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:26.065891    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:26.065900    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:26.065903    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:26.065907    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:26.065912    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:40:27.110790    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:27.111011    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:27.126479    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:27.126583    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:27.138768    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:27.138844    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:27.149751    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:27.149835    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:27.159975    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:27.160059    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:27.170593    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:27.170666    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:27.180786    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:27.180855    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:27.191455    9658 logs.go:282] 0 containers: []
	W1209 03:40:27.191467    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:27.191528    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:27.201672    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:27.201689    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:27.201695    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:27.234860    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:27.234869    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:27.239669    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:27.239676    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:27.253790    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:27.253801    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:27.269106    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:27.269117    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:27.280859    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:27.280871    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:27.305283    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:27.305294    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:27.341758    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:27.341772    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:27.355928    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:27.355939    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:27.367462    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:27.367477    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:27.379134    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:27.379145    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:27.393525    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:27.393535    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:27.411154    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:27.411164    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:29.925267    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:34.927539    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:34.927793    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:34.950806    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:34.950932    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:34.966763    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:34.966844    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:34.979412    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:34.979486    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:34.990680    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:34.990760    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:35.001025    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:35.001107    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:35.011695    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:35.011774    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:35.021679    9658 logs.go:282] 0 containers: []
	W1209 03:40:35.021691    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:35.021760    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:35.032051    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:35.032067    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:35.032073    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:35.065307    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:35.065316    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:35.102982    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:35.102993    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:35.122896    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:35.122908    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:35.135000    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:35.135011    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:35.147087    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:35.147097    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:35.173120    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:35.173128    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:35.186106    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:35.186115    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:35.190993    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:35.191000    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:35.204309    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:35.204320    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:35.215581    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:35.215596    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:35.230741    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:35.230752    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:35.248040    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:35.248050    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:36.069892    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:37.761793    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:41.072179    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:41.072450    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:41.096138    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:41.096262    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:41.112152    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:41.112247    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:41.125489    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:40:41.125579    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:41.136747    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:41.136827    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:41.146725    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:41.146797    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:41.157659    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:41.157740    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:41.167763    9647 logs.go:282] 0 containers: []
	W1209 03:40:41.167775    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:41.167850    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:41.178154    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:41.178171    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:41.178177    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:41.189768    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:41.189782    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:41.204160    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:41.204171    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:41.218346    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:41.218359    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:41.240032    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:41.240044    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:41.266167    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:41.266177    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:41.277443    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:41.277453    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:41.281475    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:41.281484    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:41.318807    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:40:41.318819    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:40:41.334860    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:41.334875    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:41.346628    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:41.346640    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:41.358745    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:41.358760    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:41.393746    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:41.393840    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:41.395563    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:40:41.395568    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:40:41.406596    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:41.406610    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:41.418007    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:41.418020    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:41.432991    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:41.433003    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:41.433027    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:41.433031    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:41.433034    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:41.433037    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:41.433040    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
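
	Note: in the cycle above (03:40:41) the coredns lookup reports 4 containers [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6] where earlier cycles reported 2. Since `docker ps -a` also lists exited containers, a growing count likely means the CoreDNS pods are being recreated while the apiserver stays unreachable. A hedged sketch of spotting that by diffing container-ID sets between cycles; diffIDs is illustrative, not minikube code:

	    package main

	    import "fmt"

	    // diffIDs returns the IDs present in cur but not in prev.
	    func diffIDs(prev, cur []string) []string {
	        seen := make(map[string]bool, len(prev))
	        for _, id := range prev {
	            seen[id] = true
	        }
	        var added []string
	        for _, id := range cur {
	            if !seen[id] {
	                added = append(added, id)
	            }
	        }
	        return added
	    }

	    func main() {
	        prev := []string{"8119ba0a4b38", "f980d379a2f6"}
	        cur := []string{"825aa5744cd7", "15504e3f9248",
	            "8119ba0a4b38", "f980d379a2f6"}
	        fmt.Println("new coredns containers:", diffIDs(prev, cur))
	        // new coredns containers: [825aa5744cd7 15504e3f9248]
	    }
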
	I1209 03:40:42.763967    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:42.764163    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:42.778043    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:42.778129    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:42.789854    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:42.789931    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:42.800571    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:42.800638    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:42.810785    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:42.810854    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:42.821189    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:42.821270    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:42.831608    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:42.831675    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:42.841672    9658 logs.go:282] 0 containers: []
	W1209 03:40:42.841682    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:42.841740    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:42.851621    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:42.851636    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:42.851642    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:42.885348    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:42.885359    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:42.890407    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:42.890416    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:42.907148    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:42.907158    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:42.923308    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:42.923319    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:42.940079    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:42.940089    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:42.963162    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:42.963170    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:42.976719    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:42.976731    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:43.014872    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:43.014885    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:43.029053    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:43.029064    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:43.040506    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:43.040520    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:43.055392    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:43.055404    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:43.068450    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:43.068459    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:45.582146    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:50.584013    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:50.584199    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:50.602070    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:50.602182    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:50.615646    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:50.615740    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:50.627144    9658 logs.go:282] 2 containers: [57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:50.627225    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:50.637108    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:50.637185    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:50.647435    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:50.647510    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:50.657778    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:50.657849    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:50.668471    9658 logs.go:282] 0 containers: []
	W1209 03:40:50.668488    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:50.668559    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:50.679584    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:50.679600    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:50.679606    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:50.690768    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:50.690779    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:50.695282    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:50.695289    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:51.437092    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:50.731490    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:50.731503    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:50.745637    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:50.745647    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:50.758198    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:50.758210    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:50.773965    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:50.773978    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:40:50.791251    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:50.791261    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:50.820387    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:50.820409    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:50.870856    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:50.870875    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:50.895065    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:50.895082    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:50.917270    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:50.917283    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:50.940989    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:50.941002    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:53.455291    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:56.439628    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:56.439854    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:56.458437    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:56.458551    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:56.472641    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:56.472732    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:56.485319    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:40:56.485409    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:56.495878    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:56.495963    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:56.506725    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:56.506795    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:56.517362    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:56.517439    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:56.528227    9647 logs.go:282] 0 containers: []
	W1209 03:40:56.528241    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:56.528313    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:56.538892    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:56.538908    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:56.538915    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:56.574576    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:56.574675    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:56.576396    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:56.576404    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:56.580846    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:56.580853    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:56.592440    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:56.592454    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:56.607114    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:56.607127    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:56.633156    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:56.633169    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:56.645218    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:56.645228    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:56.659596    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:40:56.659607    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:40:56.671320    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:40:56.671332    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:40:56.687429    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:56.687439    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:56.703220    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:56.703231    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:56.724546    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:56.724560    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:56.738961    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:56.738972    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:56.750873    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:56.750884    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:56.787062    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:56.787072    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:56.798798    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:56.798808    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:56.798833    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:56.798837    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:56.798842    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:56.798852    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:56.798855    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
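
	Note: the recurring "out.go:392] TERM=,COLORTERM=, which probably does not support color" lines show the harness reading empty TERM/COLORTERM environment variables and disabling colored output before printing each problem summary. A minimal sketch of that kind of check, assuming a helper like wantColor that is illustrative rather than minikube's actual out package:

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    // wantColor treats empty TERM and COLORTERM as "no color",
	    // matching the "probably does not support color" lines above.
	    func wantColor() bool {
	        term := os.Getenv("TERM")
	        colorTerm := os.Getenv("COLORTERM")
	        if term == "" && colorTerm == "" {
	            return false
	        }
	        return term != "dumb"
	    }

	    func main() {
	        fmt.Printf("TERM=%s,COLORTERM=%s color=%v\n",
	            os.Getenv("TERM"), os.Getenv("COLORTERM"), wantColor())
	    }
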
	I1209 03:40:58.456417    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:58.456640    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:58.476916    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:40:58.477022    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:58.491602    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:40:58.491690    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:58.504044    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:40:58.504128    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:58.515261    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:40:58.515343    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:58.525445    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:40:58.525529    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:58.540621    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:40:58.540695    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:58.554149    9658 logs.go:282] 0 containers: []
	W1209 03:40:58.554162    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:58.554231    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:58.567498    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:40:58.567515    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:40:58.567522    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:40:58.584548    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:40:58.584560    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:40:58.596341    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:40:58.596354    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:40:58.608159    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:40:58.608171    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:40:58.619438    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:58.619450    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:58.645141    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:40:58.645152    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:58.657221    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:40:58.657232    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:40:58.669566    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:58.669577    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:40:58.704914    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:58.704925    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:58.709793    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:58.709801    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:58.744345    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:40:58.744359    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:40:58.758431    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:40:58.758442    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:40:58.770062    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:40:58.770075    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:40:58.787827    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:40:58.787838    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:40:58.799337    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:40:58.799350    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
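
Editor's note: each cycle enumerates the control-plane containers by running docker ps -a with one name filter per component (the logs.go:282 lines). A self-contained sketch of that enumeration using os/exec locally; the real code runs the same command inside the VM over SSH via ssh_runner.go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs of all containers (running or exited)
    // whose name matches k8s_<component>, mirroring:
    //   docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // One ID per line; Fields also tolerates a trailing newline.
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            // e.g. "1 containers: [ed057c2187c1]" as in the log above
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }

An empty result (as with "kindnet" above) simply means no container of that name exists on this cluster, hence the repeated warning rather than an error.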
	I1209 03:41:01.318630    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:06.801493    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:06.321013    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:06.321214    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:06.340618    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:06.340730    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:06.355130    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:06.355221    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:06.367710    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:06.367803    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:06.378906    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:06.378987    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:06.395431    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:06.395513    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:06.411012    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:06.411092    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:06.421406    9658 logs.go:282] 0 containers: []
	W1209 03:41:06.421422    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:06.421488    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:06.432142    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:06.432161    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:06.432166    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:06.444105    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:06.444115    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:06.479828    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:06.479836    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:06.488070    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:06.488083    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:06.505910    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:06.505920    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:06.524923    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:06.524936    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:06.540024    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:06.540034    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:06.554948    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:06.554961    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:06.567559    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:06.567572    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:06.579595    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:06.579607    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:06.593441    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:06.593453    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:06.605573    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:06.605586    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:06.617279    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:06.617290    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:06.654589    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:06.654600    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:06.666567    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:06.666577    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:09.193343    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:11.801646    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:11.801863    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:11.814987    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:11.815067    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:11.826474    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:11.826553    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:11.837599    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:11.837674    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:11.847931    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:11.848013    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:11.863123    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:11.863200    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:11.877802    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:11.877874    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:11.888058    9647 logs.go:282] 0 containers: []
	W1209 03:41:11.888070    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:11.888136    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:11.898493    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:11.898510    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:11.898515    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:11.917680    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:11.917693    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:11.931551    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:11.931564    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:11.943032    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:11.943046    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:11.957727    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:11.957742    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:11.974778    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:11.974788    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:11.999149    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:11.999158    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:12.018150    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:12.018164    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:12.030152    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:12.030163    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:12.042034    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:12.042047    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:12.060015    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:12.060027    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:12.071663    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:12.071677    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:12.108347    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:12.108447    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:12.110219    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:12.110225    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:12.115276    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:12.115284    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:12.155088    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:12.155099    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:12.167797    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:12.167812    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:12.167841    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:12.167847    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:12.167852    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:12.167872    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:12.167877    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
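
Editor's note: the only problem the scan keeps surfacing is the pair of kubelet RBAC errors repeated above: node stopped-upgrade-416000 is denied list/watch on the kube-proxy ConfigMap because the node authorizer finds no relationship between that node and the object. The "Found kubelet problem:" warnings (logs.go:138) come from matching the journalctl output against known error substrings; a rough sketch of that kind of scan, with a hypothetical pattern list:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // problemPatterns is a hypothetical subset of the substrings one
    // might flag when scanning `journalctl -u kubelet` output.
    var problemPatterns = []string{
        "is forbidden",
        "no relationship found between node",
    }

    // findProblems returns every journal line matching a pattern,
    // the way the "Found kubelet problem:" warnings were produced.
    func findProblems(journal string) []string {
        var hits []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            for _, p := range problemPatterns {
                if strings.Contains(line, p) {
                    hits = append(hits, line)
                    break
                }
            }
        }
        return hits
    }

    func main() {
        journal := "Dec 09 11:38:37 kubelet[10474]: configmaps \"kube-proxy\" is forbidden\n" +
            "Dec 09 11:38:38 kubelet[10474]: syncing pods"
        for _, h := range findProblems(journal) {
            fmt.Println("Found kubelet problem:", h)
        }
    }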
	I1209 03:41:14.195659    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:14.195830    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:14.208281    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:14.208354    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:14.229320    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:14.229412    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:14.240342    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:14.240427    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:14.250852    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:14.250931    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:14.261182    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:14.261267    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:14.271637    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:14.271713    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:14.282201    9658 logs.go:282] 0 containers: []
	W1209 03:41:14.282213    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:14.282281    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:14.292303    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:14.292320    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:14.292325    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:14.332112    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:14.332124    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:14.344202    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:14.344214    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:14.355672    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:14.355683    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:14.373253    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:14.373265    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:14.397755    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:14.397762    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:14.433122    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:14.433134    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:14.444972    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:14.444983    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:14.459684    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:14.459698    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:14.471643    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:14.471654    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:14.476567    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:14.476575    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:14.493016    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:14.493027    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:14.507249    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:14.507264    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:14.522468    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:14.522479    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:14.537834    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:14.537849    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:17.051087    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:22.171792    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:22.052033    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:22.052321    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:22.077926    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:22.078049    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:22.095550    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:22.095635    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:22.114750    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:22.114836    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:22.125608    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:22.125691    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:22.136184    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:22.136265    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:22.147305    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:22.147389    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:22.157097    9658 logs.go:282] 0 containers: []
	W1209 03:41:22.157107    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:22.157174    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:22.167682    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:22.167701    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:22.167707    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:22.181817    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:22.181828    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:22.194070    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:22.194081    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:22.205889    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:22.205902    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:22.225769    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:22.225779    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:22.237320    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:22.237331    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:22.270458    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:22.270470    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:22.274961    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:22.274970    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:22.289156    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:22.289166    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:22.313635    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:22.313642    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:22.347695    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:22.347706    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:22.366765    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:22.366776    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:22.379097    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:22.379109    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:22.391155    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:22.391166    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:22.402725    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:22.402737    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:24.922024    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:27.173888    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:27.174021    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:27.185881    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:27.185969    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:27.196698    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:27.196771    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:27.207078    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:27.207159    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:27.217665    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:27.217733    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:27.228097    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:27.228164    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:27.242732    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:27.242814    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:27.252899    9647 logs.go:282] 0 containers: []
	W1209 03:41:27.252911    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:27.252982    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:27.263366    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:27.263382    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:27.263388    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:27.275346    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:27.275358    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:27.280171    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:27.280180    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:27.313531    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:27.313542    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:27.324944    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:27.324956    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:27.336701    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:27.336711    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:27.347828    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:27.347842    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:27.372184    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:27.372196    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:27.384734    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:27.384745    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:27.399179    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:27.399191    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:27.436239    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:27.436332    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:27.438059    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:27.438063    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:27.452096    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:27.452108    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:27.463610    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:27.463621    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:27.478554    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:27.478564    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:27.490336    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:27.490345    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:27.507557    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:27.507567    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:27.507588    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:27.507592    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:27.507596    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:27.507618    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:27.507624    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
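
Editor's note: every "Gathering logs for <component> [<id>] ..." step above tails that container's output with docker logs --tail 400. A local sketch of the same command; minikube wraps it in /bin/bash -c and executes it inside the VM over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the ssh_runner.go invocations above:
    //   /bin/bash -c "docker logs --tail 400 <id>"
    // CombinedOutput is used because `docker logs` writes the
    // container's stderr stream as well as stdout.
    func tailContainerLogs(id string, lines int) (string, error) {
        cmd := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail %d %s", lines, id))
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        // Container ID taken from the enumeration above; substitute
        // an ID that exists on your own host.
        out, err := tailContainerLogs("ed057c2187c1", 400)
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }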
	I1209 03:41:29.924247    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:29.924463    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:29.942909    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:29.943015    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:29.956858    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:29.956946    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:29.969234    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:29.969319    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:29.979760    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:29.979842    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:29.990728    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:29.990814    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:30.001935    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:30.002017    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:30.016730    9658 logs.go:282] 0 containers: []
	W1209 03:41:30.016741    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:30.016813    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:30.028078    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:30.028101    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:30.028107    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:30.040403    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:30.040417    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:30.055077    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:30.055091    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:30.067357    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:30.067368    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:30.078764    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:30.078779    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:30.119701    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:30.119712    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:30.131753    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:30.131766    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:30.143842    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:30.143854    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:30.148837    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:30.148844    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:30.163445    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:30.163458    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:30.179821    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:30.179832    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:30.191754    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:30.191765    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:30.203253    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:30.203263    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:30.220611    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:30.220620    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:30.245404    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:30.245415    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:32.782429    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:37.511592    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:37.784629    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:37.784856    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:37.809184    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:37.809295    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:37.822926    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:37.823008    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:37.835187    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:37.835270    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:37.852555    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:37.852646    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:37.874139    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:37.874226    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:37.885412    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:37.885491    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:37.895786    9658 logs.go:282] 0 containers: []
	W1209 03:41:37.895800    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:37.895862    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:37.906839    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:37.906856    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:37.906862    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:37.921282    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:37.921292    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:37.932490    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:37.932503    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:37.956861    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:37.956872    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:37.961237    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:37.961246    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:37.975698    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:37.975711    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:37.987691    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:37.987702    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:37.999751    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:37.999762    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:38.017871    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:38.017882    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:38.054571    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:38.054588    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:38.067001    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:38.067011    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:38.103153    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:38.103166    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:38.118266    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:38.118280    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:38.133407    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:38.133421    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:38.150684    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:38.150698    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:40.664782    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:42.513849    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:42.514152    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:42.539617    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:42.539761    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:42.556895    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:42.557009    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:42.570222    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:42.570311    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:42.581439    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:42.581518    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:42.612150    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:42.612234    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:42.626716    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:42.626793    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:42.637185    9647 logs.go:282] 0 containers: []
	W1209 03:41:42.637197    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:42.637264    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:42.647377    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:42.647397    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:42.647403    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:42.652079    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:42.652091    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:42.664040    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:42.664052    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:42.676208    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:42.676219    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:42.688022    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:42.688033    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:42.711739    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:42.711750    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:42.723399    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:42.723413    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:42.758020    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:42.758115    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:42.759939    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:42.759945    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:42.774012    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:42.774024    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:42.786600    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:42.786614    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:45.666962    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:45.667211    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:45.691898    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:45.692040    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:42.811937    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:42.811946    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:42.846300    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:42.846311    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:42.858652    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:42.858665    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:42.874328    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:42.874342    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:42.889380    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:42.889391    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:42.908313    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:42.908323    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:42.908350    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:42.908353    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:42.908357    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:42.908361    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:42.908364    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:41:45.707551    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:45.707641    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:45.720577    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:45.720865    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:45.732583    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:45.732670    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:45.742875    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:45.742961    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:45.755813    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:45.755902    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:45.765822    9658 logs.go:282] 0 containers: []
	W1209 03:41:45.765837    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:45.765912    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:45.776169    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:45.776187    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:45.776194    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:45.788416    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:45.788426    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:45.823213    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:45.823222    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:45.842070    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:45.842079    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:45.857023    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:45.857033    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:45.868853    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:45.868864    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:45.908653    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:45.908668    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:45.920579    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:45.920588    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:45.944230    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:45.944238    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:45.948942    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:45.948948    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:45.962930    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:45.962943    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:45.974875    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:45.974886    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:45.992598    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:45.992611    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:46.004481    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:46.004491    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:46.016396    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:46.016407    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:48.529826    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:53.532122    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:53.532323    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:53.550847    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:41:53.550945    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:53.566747    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:41:53.566826    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:53.578318    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:41:53.578398    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:53.593531    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:41:53.593613    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:53.605966    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:41:53.606033    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:53.617190    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:41:53.617268    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:53.627642    9658 logs.go:282] 0 containers: []
	W1209 03:41:53.627654    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:53.627724    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:53.638613    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:41:53.638629    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:53.638635    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:53.643350    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:53.643357    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:53.682446    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:41:53.682461    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:41:53.697365    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:41:53.697377    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:41:53.709958    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:41:53.709970    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:41:53.725357    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:41:53.725369    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:41:53.736989    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:41:53.736998    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:41:53.750504    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:41:53.750516    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:41:53.769162    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:41:53.769176    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:41:53.781044    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:41:53.781055    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:41:53.795642    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:41:53.795652    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:41:53.807316    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:53.807326    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:41:53.841375    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:41:53.841388    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:41:53.858924    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:53.858937    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:53.883495    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:41:53.883505    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:52.912269    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:56.397829    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:57.914507    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:57.914743    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:57.937587    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:57.937725    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:57.954540    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:57.954637    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:57.967631    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:57.967722    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:57.980192    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:57.980268    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:57.990760    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:57.990835    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:58.001786    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:58.001858    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:58.019022    9647 logs.go:282] 0 containers: []
	W1209 03:41:58.019036    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:58.019099    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:58.029667    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:58.029685    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:58.029691    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:58.041436    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:58.041448    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:58.057188    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:58.057197    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:58.074611    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:58.074622    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:58.079495    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:58.079501    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:58.090907    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:58.090918    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:58.102577    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:58.102587    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:58.140142    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:58.140241    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:58.142047    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:58.142057    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:58.155539    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:58.155550    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:58.169467    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:58.169480    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:58.181424    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:58.181437    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:58.196255    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:58.196266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:58.207987    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:58.208001    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:58.242659    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:58.242671    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:58.257689    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:58.257702    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:58.282003    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:58.282013    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:58.282036    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:58.282040    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:58.282043    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:58.282049    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:58.282051    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:42:01.400339    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:01.400472    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:01.411890    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:01.411961    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:01.422472    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:01.422557    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:01.433193    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:01.433273    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:01.443789    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:01.443873    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:01.455087    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:01.455168    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:01.465808    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:01.465891    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:01.476440    9658 logs.go:282] 0 containers: []
	W1209 03:42:01.476452    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:01.476525    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:01.499799    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:01.499818    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:01.499826    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:01.514468    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:01.514480    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:01.526361    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:01.526374    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:01.537584    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:01.537597    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:01.564223    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:01.564240    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:01.600505    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:01.600512    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:01.605282    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:01.605290    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:01.617248    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:01.617262    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:01.629337    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:01.629348    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:01.643939    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:01.643951    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:01.655378    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:01.655389    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:01.673032    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:01.673042    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:01.710238    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:01.710252    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:01.725833    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:01.725846    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:01.741889    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:01.741900    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:04.258045    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:09.259300    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:09.259409    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:09.270546    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:09.270654    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:09.280844    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:09.280920    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:09.292469    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:09.292557    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:09.303216    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:09.303295    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:09.313809    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:09.313889    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:09.324829    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:09.324909    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:09.335531    9658 logs.go:282] 0 containers: []
	W1209 03:42:09.335544    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:09.335617    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:09.346372    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:09.346387    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:09.346392    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:09.358506    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:09.358517    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:09.373476    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:09.373486    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:09.384899    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:09.384908    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:09.422487    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:09.422498    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:09.434143    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:09.434153    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:09.448263    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:09.448273    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:09.471445    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:09.471453    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:09.504569    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:09.504576    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:09.509086    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:09.509095    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:09.521268    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:09.521279    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:09.539107    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:09.539117    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:09.550562    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:09.550572    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:09.563933    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:09.563943    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:09.578380    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:09.578392    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:08.285440    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:12.092170    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:13.287705    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:13.287896    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:13.301952    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:42:13.302046    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:13.313344    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:42:13.313420    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:13.323935    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:42:13.324016    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:13.334690    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:42:13.334763    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:13.346288    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:42:13.346364    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:13.356617    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:42:13.356695    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:13.366925    9647 logs.go:282] 0 containers: []
	W1209 03:42:13.366939    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:13.367001    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:13.381744    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:42:13.381765    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:13.381793    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:42:13.418081    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:42:13.418175    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:42:13.419885    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:13.419890    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:13.423839    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:42:13.423847    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:42:13.447655    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:42:13.447672    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:42:13.465274    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:42:13.465284    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:42:13.485750    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:42:13.485762    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:42:13.497525    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:13.497537    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:13.534406    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:42:13.534419    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:42:13.549345    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:42:13.549358    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:42:13.567132    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:42:13.567143    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:42:13.584774    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:42:13.584788    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:42:13.596202    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:13.596212    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:13.622375    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:42:13.622384    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:42:13.634945    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:42:13.634958    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:42:13.647275    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:42:13.647290    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:13.666793    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:42:13.666806    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:42:13.666833    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:42:13.666837    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:42:13.666840    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:42:13.666844    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:42:13.666846    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:42:17.094505    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:17.094847    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:17.127981    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:17.128083    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:17.143115    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:17.143195    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:17.156207    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:17.156290    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:17.166668    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:17.166747    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:17.177340    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:17.177411    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:17.187883    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:17.187952    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:17.197849    9658 logs.go:282] 0 containers: []
	W1209 03:42:17.197859    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:17.197921    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:17.208626    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:17.208643    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:17.208649    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:17.242399    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:17.242410    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:17.276553    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:17.276563    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:17.288452    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:17.288462    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:17.309932    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:17.309942    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:17.321526    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:17.321536    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:17.345264    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:17.345273    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:17.360247    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:17.360256    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:17.374465    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:17.374475    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:17.386199    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:17.386210    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:17.400040    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:17.400051    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:17.412251    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:17.412261    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:17.424248    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:17.424260    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:17.436079    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:17.436088    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:17.440683    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:17.440690    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:19.953199    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:24.954110    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:24.954298    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:24.970654    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:24.970760    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:24.983927    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:24.984020    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:24.995377    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:24.995461    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:25.006178    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:25.006259    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:25.017231    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:25.017316    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:25.031897    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:25.031974    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:25.042017    9658 logs.go:282] 0 containers: []
	W1209 03:42:25.042029    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:25.042105    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:25.056489    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:25.056511    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:25.056518    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:25.092890    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:25.092904    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:25.108012    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:25.108023    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:25.130966    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:25.130977    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:25.142982    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:25.142997    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:25.157629    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:25.157640    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:25.175046    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:25.175056    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:25.191082    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:25.191092    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:25.195702    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:25.195711    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:25.230002    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:25.230015    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:25.241391    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:25.241410    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:25.253723    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:25.253736    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:25.268357    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:25.268370    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:25.279532    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:25.279542    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:25.291375    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:25.291385    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:23.670827    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:28.671571    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:28.674759    9647 out.go:201] 
	W1209 03:42:28.678767    9647 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 03:42:28.678776    9647 out.go:270] * 
	W1209 03:42:28.679763    9647 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:42:28.691818    9647 out.go:201] 
	I1209 03:42:27.805150    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:32.807390    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:32.807637    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:32.828329    9658 logs.go:282] 1 containers: [ed057c2187c1]
	I1209 03:42:32.828454    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:32.843716    9658 logs.go:282] 1 containers: [53660a58d31a]
	I1209 03:42:32.843802    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:32.857354    9658 logs.go:282] 4 containers: [81413c2ebd6f 181736ff162b 57b5cacc1bf2 d649aaf9ab40]
	I1209 03:42:32.857446    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:32.868706    9658 logs.go:282] 1 containers: [647c80f1a4ee]
	I1209 03:42:32.868779    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:32.880212    9658 logs.go:282] 1 containers: [4af6f6464df0]
	I1209 03:42:32.880305    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:32.891318    9658 logs.go:282] 1 containers: [6e8792ff61dd]
	I1209 03:42:32.891412    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:32.902346    9658 logs.go:282] 0 containers: []
	W1209 03:42:32.902357    9658 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:32.902423    9658 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:32.913232    9658 logs.go:282] 1 containers: [d7b0f32df7bb]
	I1209 03:42:32.913254    9658 logs.go:123] Gathering logs for etcd [53660a58d31a] ...
	I1209 03:42:32.913259    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53660a58d31a"
	I1209 03:42:32.933724    9658 logs.go:123] Gathering logs for coredns [81413c2ebd6f] ...
	I1209 03:42:32.933741    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81413c2ebd6f"
	I1209 03:42:32.945510    9658 logs.go:123] Gathering logs for kube-proxy [4af6f6464df0] ...
	I1209 03:42:32.945521    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af6f6464df0"
	I1209 03:42:32.957549    9658 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:32.957563    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:32.993612    9658 logs.go:123] Gathering logs for kube-scheduler [647c80f1a4ee] ...
	I1209 03:42:32.993625    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 647c80f1a4ee"
	I1209 03:42:33.009178    9658 logs.go:123] Gathering logs for kube-controller-manager [6e8792ff61dd] ...
	I1209 03:42:33.009193    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e8792ff61dd"
	I1209 03:42:33.026850    9658 logs.go:123] Gathering logs for storage-provisioner [d7b0f32df7bb] ...
	I1209 03:42:33.026860    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b0f32df7bb"
	I1209 03:42:33.038195    9658 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:33.038207    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:42:33.072729    9658 logs.go:123] Gathering logs for coredns [181736ff162b] ...
	I1209 03:42:33.072738    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 181736ff162b"
	I1209 03:42:33.087091    9658 logs.go:123] Gathering logs for coredns [d649aaf9ab40] ...
	I1209 03:42:33.087102    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d649aaf9ab40"
	I1209 03:42:33.101101    9658 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:33.101115    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:33.105936    9658 logs.go:123] Gathering logs for coredns [57b5cacc1bf2] ...
	I1209 03:42:33.105943    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57b5cacc1bf2"
	I1209 03:42:33.117706    9658 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:33.117718    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:33.142676    9658 logs.go:123] Gathering logs for container status ...
	I1209 03:42:33.142687    9658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:33.154800    9658 logs.go:123] Gathering logs for kube-apiserver [ed057c2187c1] ...
	I1209 03:42:33.154814    9658 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed057c2187c1"
	I1209 03:42:35.671522    9658 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:40.673637    9658 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:40.677674    9658 out.go:201] 
	W1209 03:42:40.681724    9658 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 03:42:40.681729    9658 out.go:270] * 
	W1209 03:42:40.682190    9658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:42:40.696635    9658 out.go:201] 
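	For context on the failure above: the alternating "Checking apiserver healthz at https://10.0.2.15:8443/healthz ..." / "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" pairs are a simple probe loop — each GET is bounded by a short client timeout, log diagnostics are gathered after each failed probe, and the whole wait is capped at the 6m0s reported in the GUEST_START error. The following is a minimal Go sketch of that polling pattern, not minikube's actual implementation; the URL, the roughly 5-second per-request timeout, and the 6-minute overall deadline are taken from the log lines above, while the InsecureSkipVerify setting is an assumption made here because the apiserver serves a self-signed certificate.

	// Illustrative sketch of the healthz polling loop seen in the log above.
	// NOT minikube's code; timings and URL mirror the log, TLS handling is assumed.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, perRequest, overall time.Duration) error {
		client := &http.Client{
			Timeout: perRequest, // each probe gives up after this long
			Transport: &http.Transport{
				// Assumption: skip verification of the apiserver's self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(5 * time.Second) // back off before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
		fmt.Println(err)
	}

	Against an apiserver that never answers, this loop fails exactly the way the run above does: every probe times out, and after the overall deadline the caller reports the healthz endpoint as never healthy.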
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-12-09 11:33:24 UTC, ends at Mon 2024-12-09 11:42:56 UTC. --
	Dec 09 11:42:41 running-upgrade-765000 dockerd[4432]: time="2024-12-09T11:42:41.091645553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 11:42:41 running-upgrade-765000 dockerd[4432]: time="2024-12-09T11:42:41.091690343Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b2c70a0d46caec4ca0bfd8c29653a619f9749e947060b143c22abd1ec787574c pid=20541 runtime=io.containerd.runc.v2
	Dec 09 11:42:41 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:41Z" level=error msg="ContainerStats resp: {0x400008f980 linux}"
	Dec 09 11:42:41 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:41Z" level=error msg="ContainerStats resp: {0x4000633f40 linux}"
	Dec 09 11:42:42 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:42Z" level=error msg="ContainerStats resp: {0x4000808940 linux}"
	Dec 09 11:42:42 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x40009940c0 linux}"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x4000809800 linux}"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x4000809c40 linux}"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x4000809d80 linux}"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x4000995240 linux}"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x4000995740 linux}"
	Dec 09 11:42:43 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:43Z" level=error msg="ContainerStats resp: {0x4000995bc0 linux}"
	Dec 09 11:42:47 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 09 11:42:52 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 09 11:42:53 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:53Z" level=error msg="ContainerStats resp: {0x4000939ac0 linux}"
	Dec 09 11:42:53 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:53Z" level=error msg="ContainerStats resp: {0x400008f640 linux}"
	Dec 09 11:42:54 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:54Z" level=error msg="ContainerStats resp: {0x4000833100 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x400035a340 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x400035a540 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x40001ead40 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x40001eb300 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x400035a540 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x40001ea640 linux}"
	Dec 09 11:42:55 running-upgrade-765000 cri-dockerd[4263]: time="2024-12-09T11:42:55Z" level=error msg="ContainerStats resp: {0x400007e380 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b2c70a0d46cae       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   3d39d746414e8
	ec26716782b5c       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   11565572f23e3
	81413c2ebd6fc       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   11565572f23e3
	181736ff162be       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3d39d746414e8
	d7b0f32df7bb0       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   ae18a732b815f
	4af6f6464df00       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c948c349b9ef1
	53660a58d31a3       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   fa2f69ba80548
	6e8792ff61dda       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   7a716d5933254
	ed057c2187c17       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   42d4908f552f1
	647c80f1a4ee1       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e9e4b600f0ca8
	
	
	==> coredns [181736ff162b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1785501752803145058.965492108890747076. HINFO: read udp 10.244.0.2:49993->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1785501752803145058.965492108890747076. HINFO: read udp 10.244.0.2:55968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1785501752803145058.965492108890747076. HINFO: read udp 10.244.0.2:54583->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1785501752803145058.965492108890747076. HINFO: read udp 10.244.0.2:45759->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1785501752803145058.965492108890747076. HINFO: read udp 10.244.0.2:48022->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [81413c2ebd6f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:56900->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:47018->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:51832->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:56296->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:47881->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:45122->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:33617->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:49423->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:37657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6857252564037092438.2044604181095276168. HINFO: read udp 10.244.0.3:54282->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b2c70a0d46ca] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5782089554039646361.1669043236595803342. HINFO: read udp 10.244.0.2:40311->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782089554039646361.1669043236595803342. HINFO: read udp 10.244.0.2:40935->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782089554039646361.1669043236595803342. HINFO: read udp 10.244.0.2:38156->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ec26716782b5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3239537022842334118.6578162352448615161. HINFO: read udp 10.244.0.3:39308->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3239537022842334118.6578162352448615161. HINFO: read udp 10.244.0.3:42864->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3239537022842334118.6578162352448615161. HINFO: read udp 10.244.0.3:55365->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3239537022842334118.6578162352448615161. HINFO: read udp 10.244.0.3:39618->10.0.2.3:53: i/o timeout
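	The repeated "read udp 10.244.0.x:...->10.0.2.3:53: i/o timeout" lines in the four coredns dumps above come from CoreDNS's startup HINFO self-probe timing out against its upstream forwarder, which in this QEMU guest is the user-mode-networking DNS at 10.0.2.3. The sketch below shows, under stated assumptions, how the same i/o timeout surfaces when every lookup is forced through an unresponsive upstream; it is not CoreDNS code, and the queried hostname and the 2-second deadline are arbitrary choices for illustration.

	// Illustrative sketch: force lookups through the upstream seen in the log
	// (10.0.2.3:53) and observe the timeout. NOT CoreDNS code; hostname and
	// deadline are assumptions.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true, // use Go's built-in DNS client so Dial below is honored
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				// Ignore the system resolver address; dial the log's upstream instead.
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		_, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		fmt.Println(err) // with an unresponsive upstream: "... i/o timeout"
	}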
	
	
	==> describe nodes <==
	Name:               running-upgrade-765000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-765000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
	                    minikube.k8s.io/name=running-upgrade-765000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T03_38_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 11:38:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-765000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 11:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 11:38:39 +0000   Mon, 09 Dec 2024 11:38:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 11:38:39 +0000   Mon, 09 Dec 2024 11:38:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 11:38:39 +0000   Mon, 09 Dec 2024 11:38:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 11:38:39 +0000   Mon, 09 Dec 2024 11:38:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-765000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 89638ed071d0411c978d97501fce43d3
	  System UUID:                89638ed071d0411c978d97501fce43d3
	  Boot ID:                    6bc09c7c-6283-4fa4-bc0d-56274ef457e2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bnx2p                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 coredns-6d4b75cb6d-vshr9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 etcd-running-upgrade-765000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-765000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-765000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-9njms                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-765000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-765000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-765000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-765000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-765000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-765000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-765000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-765000 status is now: NodeReady
	  Normal  RegisteredNode           4m6s                   node-controller  Node running-upgrade-765000 event: Registered Node running-upgrade-765000 in Controller
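	
	Note: the "Allocated resources" figures above are column sums over the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m, and 850m / 2000m on this 2-CPU node is about 42%; memory requests 70Mi + 70Mi + 100Mi = 240Mi, about 11% of the 2148820Ki allocatable.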
	
	
	==> dmesg <==
	[  +0.086110] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.081207] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.140323] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.081610] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.084253] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.083392] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +8.124848] systemd-fstab-generator[1935]: Ignoring "noauto" for root device
	[Dec 9 11:34] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.451953] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[  +0.157249] systemd-fstab-generator[2697]: Ignoring "noauto" for root device
	[  +0.103508] systemd-fstab-generator[2711]: Ignoring "noauto" for root device
	[  +0.114116] systemd-fstab-generator[2726]: Ignoring "noauto" for root device
	[  +5.034101] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.397313] systemd-fstab-generator[4218]: Ignoring "noauto" for root device
	[  +0.064707] systemd-fstab-generator[4230]: Ignoring "noauto" for root device
	[  +0.087361] systemd-fstab-generator[4242]: Ignoring "noauto" for root device
	[  +0.090551] systemd-fstab-generator[4256]: Ignoring "noauto" for root device
	[  +2.320967] systemd-fstab-generator[4405]: Ignoring "noauto" for root device
	[  +2.719364] systemd-fstab-generator[4745]: Ignoring "noauto" for root device
	[  +1.109480] systemd-fstab-generator[4890]: Ignoring "noauto" for root device
	[  +0.199228] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.203794] kauditd_printk_skb: 7 callbacks suppressed
	[Dec 9 11:38] systemd-fstab-generator[13689]: Ignoring "noauto" for root device
	[  +5.637155] systemd-fstab-generator[14285]: Ignoring "noauto" for root device
	[  +0.467180] systemd-fstab-generator[14419]: Ignoring "noauto" for root device
	
	
	==> etcd [53660a58d31a] <==
	{"level":"info","ts":"2024-12-09T11:38:35.443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-09T11:38:35.449Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-12-09T11:38:35.450Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-09T11:38:35.450Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-09T11:38:35.450Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-09T11:38:35.450Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T11:38:35.450Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-09T11:38:35.758Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:38:35.761Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:38:35.761Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-765000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T11:38:35.761Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:38:35.761Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T11:38:35.761Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:38:35.762Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-09T11:38:35.765Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T11:38:35.765Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T11:38:35.769Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T11:38:35.769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:42:57 up 9 min,  0 users,  load average: 0.35, 0.36, 0.21
	Linux running-upgrade-765000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ed057c2187c1] <==
	I1209 11:38:37.085728       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1209 11:38:37.085748       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1209 11:38:37.085815       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1209 11:38:37.096214       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1209 11:38:37.118762       1 cache.go:39] Caches are synced for autoregister controller
	I1209 11:38:37.118902       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1209 11:38:37.120439       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1209 11:38:37.813919       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1209 11:38:37.989301       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1209 11:38:37.990549       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1209 11:38:37.990556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 11:38:38.121267       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 11:38:38.133212       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 11:38:38.162590       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1209 11:38:38.164688       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1209 11:38:38.165054       1 controller.go:611] quota admission added evaluator for: endpoints
	I1209 11:38:38.166308       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 11:38:39.127595       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1209 11:38:39.800112       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1209 11:38:39.808266       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1209 11:38:39.817548       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1209 11:38:39.860695       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 11:38:52.832391       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1209 11:38:52.881746       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1209 11:38:53.428419       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6e8792ff61dd] <==
	I1209 11:38:51.974421       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1209 11:38:51.974443       1 event.go:294] "Event occurred" object="running-upgrade-765000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-765000 event: Registered Node running-upgrade-765000 in Controller"
	I1209 11:38:51.974460       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1209 11:38:51.978209       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1209 11:38:51.981009       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1209 11:38:51.981082       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1209 11:38:51.981093       1 shared_informer.go:262] Caches are synced for deployment
	I1209 11:38:51.981201       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1209 11:38:51.981344       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1209 11:38:51.981616       1 shared_informer.go:262] Caches are synced for ephemeral
	I1209 11:38:51.981626       1 shared_informer.go:262] Caches are synced for attach detach
	I1209 11:38:51.981844       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1209 11:38:51.981856       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1209 11:38:51.985445       1 shared_informer.go:262] Caches are synced for namespace
	I1209 11:38:52.030625       1 shared_informer.go:262] Caches are synced for stateful set
	I1209 11:38:52.031959       1 shared_informer.go:262] Caches are synced for daemon sets
	I1209 11:38:52.138812       1 shared_informer.go:262] Caches are synced for resource quota
	I1209 11:38:52.183937       1 shared_informer.go:262] Caches are synced for resource quota
	I1209 11:38:52.609508       1 shared_informer.go:262] Caches are synced for garbage collector
	I1209 11:38:52.651788       1 shared_informer.go:262] Caches are synced for garbage collector
	I1209 11:38:52.651880       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1209 11:38:52.833788       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1209 11:38:52.884412       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9njms"
	I1209 11:38:52.983168       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vshr9"
	I1209 11:38:52.988488       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bnx2p"
	
	
	==> kube-proxy [4af6f6464df0] <==
	I1209 11:38:53.401125       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1209 11:38:53.401150       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1209 11:38:53.401160       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1209 11:38:53.425523       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1209 11:38:53.425532       1 server_others.go:206] "Using iptables Proxier"
	I1209 11:38:53.425543       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1209 11:38:53.425637       1 server.go:661] "Version info" version="v1.24.1"
	I1209 11:38:53.425640       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 11:38:53.426613       1 config.go:317] "Starting service config controller"
	I1209 11:38:53.426617       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1209 11:38:53.426626       1 config.go:226] "Starting endpoint slice config controller"
	I1209 11:38:53.426627       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1209 11:38:53.427699       1 config.go:444] "Starting node config controller"
	I1209 11:38:53.427703       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1209 11:38:53.527568       1 shared_informer.go:262] Caches are synced for service config
	I1209 11:38:53.527630       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1209 11:38:53.527858       1 shared_informer.go:262] Caches are synced for node config
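	
	Note: "Unknown proxy mode, assuming iptables proxy" means proxyMode was left empty in the kube-proxy configuration, so the proxier defaults to iptables, and the IPv6 line records that the single-stack setup ignores that address family. A hedged way to inspect the effective config (the ConfigMap name assumed here is the kubeadm default):
	
	  $ kubectl -n kube-system get configmap kube-proxy -o yaml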
	
	
	==> kube-scheduler [647c80f1a4ee] <==
	W1209 11:38:37.048459       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 11:38:37.048527       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1209 11:38:37.048486       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 11:38:37.048563       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1209 11:38:37.048497       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1209 11:38:37.048591       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1209 11:38:37.048508       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:38:37.048633       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1209 11:38:37.048664       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 11:38:37.048698       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1209 11:38:37.048713       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:38:37.048755       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1209 11:38:37.048726       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1209 11:38:37.048805       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1209 11:38:37.049151       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 11:38:37.049200       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 11:38:37.974803       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 11:38:37.974825       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1209 11:38:37.992210       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 11:38:37.992223       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1209 11:38:38.013896       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 11:38:38.013910       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1209 11:38:38.037582       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 11:38:38.037674       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1209 11:38:38.443010       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
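	
	Note: the burst of "forbidden" list/watch errors at 11:38:37 is the usual control-plane bootstrap race: the scheduler's informers start before the apiserver has finished creating the default RBAC roles, and the errors stop once RBAC is in place (the last line shows the scheduler's caches synced about a second later). Were they to persist, a quick permission check would be:
	
	  $ kubectl auth can-i list nodes --as=system:kube-scheduler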
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-12-09 11:33:24 UTC, ends at Mon 2024-12-09 11:42:57 UTC. --
	Dec 09 11:38:42 running-upgrade-765000 kubelet[14291]: E1209 11:38:42.034876   14291 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-765000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-765000"
	Dec 09 11:38:51 running-upgrade-765000 kubelet[14291]: I1209 11:38:51.968838   14291 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 09 11:38:51 running-upgrade-765000 kubelet[14291]: I1209 11:38:51.969127   14291 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 09 11:38:51 running-upgrade-765000 kubelet[14291]: I1209 11:38:51.980318   14291 topology_manager.go:200] "Topology Admit Handler"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.069750   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7xv9\" (UniqueName: \"kubernetes.io/projected/3a3ef1a0-e258-45af-97a9-65fc99c1223c-kube-api-access-c7xv9\") pod \"storage-provisioner\" (UID: \"3a3ef1a0-e258-45af-97a9-65fc99c1223c\") " pod="kube-system/storage-provisioner"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.069811   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a3ef1a0-e258-45af-97a9-65fc99c1223c-tmp\") pod \"storage-provisioner\" (UID: \"3a3ef1a0-e258-45af-97a9-65fc99c1223c\") " pod="kube-system/storage-provisioner"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: E1209 11:38:52.174651   14291 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: E1209 11:38:52.174674   14291 projected.go:192] Error preparing data for projected volume kube-api-access-c7xv9 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: E1209 11:38:52.174712   14291 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/3a3ef1a0-e258-45af-97a9-65fc99c1223c-kube-api-access-c7xv9 podName:3a3ef1a0-e258-45af-97a9-65fc99c1223c nodeName:}" failed. No retries permitted until 2024-12-09 11:38:52.674698151 +0000 UTC m=+12.888098782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c7xv9" (UniqueName: "kubernetes.io/projected/3a3ef1a0-e258-45af-97a9-65fc99c1223c-kube-api-access-c7xv9") pod "storage-provisioner" (UID: "3a3ef1a0-e258-45af-97a9-65fc99c1223c") : configmap "kube-root-ca.crt" not found
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: E1209 11:38:52.676351   14291 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: E1209 11:38:52.676371   14291 projected.go:192] Error preparing data for projected volume kube-api-access-c7xv9 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: E1209 11:38:52.676398   14291 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/3a3ef1a0-e258-45af-97a9-65fc99c1223c-kube-api-access-c7xv9 podName:3a3ef1a0-e258-45af-97a9-65fc99c1223c nodeName:}" failed. No retries permitted until 2024-12-09 11:38:53.676388603 +0000 UTC m=+13.889789235 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-c7xv9" (UniqueName: "kubernetes.io/projected/3a3ef1a0-e258-45af-97a9-65fc99c1223c-kube-api-access-c7xv9") pod "storage-provisioner" (UID: "3a3ef1a0-e258-45af-97a9-65fc99c1223c") : configmap "kube-root-ca.crt" not found
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.886593   14291 topology_manager.go:200] "Topology Admit Handler"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.979404   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27fc158a-6865-432c-bfcf-efb89b7a6f77-kube-proxy\") pod \"kube-proxy-9njms\" (UID: \"27fc158a-6865-432c-bfcf-efb89b7a6f77\") " pod="kube-system/kube-proxy-9njms"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.979477   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27fc158a-6865-432c-bfcf-efb89b7a6f77-lib-modules\") pod \"kube-proxy-9njms\" (UID: \"27fc158a-6865-432c-bfcf-efb89b7a6f77\") " pod="kube-system/kube-proxy-9njms"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.979489   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pq2\" (UniqueName: \"kubernetes.io/projected/27fc158a-6865-432c-bfcf-efb89b7a6f77-kube-api-access-l8pq2\") pod \"kube-proxy-9njms\" (UID: \"27fc158a-6865-432c-bfcf-efb89b7a6f77\") " pod="kube-system/kube-proxy-9njms"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.979518   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27fc158a-6865-432c-bfcf-efb89b7a6f77-xtables-lock\") pod \"kube-proxy-9njms\" (UID: \"27fc158a-6865-432c-bfcf-efb89b7a6f77\") " pod="kube-system/kube-proxy-9njms"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.986677   14291 topology_manager.go:200] "Topology Admit Handler"
	Dec 09 11:38:52 running-upgrade-765000 kubelet[14291]: I1209 11:38:52.993318   14291 topology_manager.go:200] "Topology Admit Handler"
	Dec 09 11:38:53 running-upgrade-765000 kubelet[14291]: I1209 11:38:53.080417   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a646da0f-99c8-4e98-bfe0-c21a39d1af56-config-volume\") pod \"coredns-6d4b75cb6d-vshr9\" (UID: \"a646da0f-99c8-4e98-bfe0-c21a39d1af56\") " pod="kube-system/coredns-6d4b75cb6d-vshr9"
	Dec 09 11:38:53 running-upgrade-765000 kubelet[14291]: I1209 11:38:53.080624   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trx5h\" (UniqueName: \"kubernetes.io/projected/a646da0f-99c8-4e98-bfe0-c21a39d1af56-kube-api-access-trx5h\") pod \"coredns-6d4b75cb6d-vshr9\" (UID: \"a646da0f-99c8-4e98-bfe0-c21a39d1af56\") " pod="kube-system/coredns-6d4b75cb6d-vshr9"
	Dec 09 11:38:53 running-upgrade-765000 kubelet[14291]: I1209 11:38:53.080642   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6fb76b-b2a9-4a16-9c22-30539764bdce-config-volume\") pod \"coredns-6d4b75cb6d-bnx2p\" (UID: \"7a6fb76b-b2a9-4a16-9c22-30539764bdce\") " pod="kube-system/coredns-6d4b75cb6d-bnx2p"
	Dec 09 11:38:53 running-upgrade-765000 kubelet[14291]: I1209 11:38:53.080654   14291 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8qq\" (UniqueName: \"kubernetes.io/projected/7a6fb76b-b2a9-4a16-9c22-30539764bdce-kube-api-access-bw8qq\") pod \"coredns-6d4b75cb6d-bnx2p\" (UID: \"7a6fb76b-b2a9-4a16-9c22-30539764bdce\") " pod="kube-system/coredns-6d4b75cb6d-bnx2p"
	Dec 09 11:42:41 running-upgrade-765000 kubelet[14291]: I1209 11:42:41.293121   14291 scope.go:110] "RemoveContainer" containerID="d649aaf9ab4098510f85251a6ff6929f9fde5e200e3e628e78a1cd1548d1efc3"
	Dec 09 11:42:41 running-upgrade-765000 kubelet[14291]: I1209 11:42:41.304946   14291 scope.go:110] "RemoveContainer" containerID="57b5cacc1bf26beb90808ee9cfa245e100e5b33ccdd2bab868a4fe064a564a50"
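	
	Note: the MountVolume.SetUp failures at 11:38:52 are another bootstrap race: each pod's projected service-account volume includes the kube-root-ca.crt ConfigMap, which is only published into each namespace shortly after the control plane comes up, so the kubelet backs off (500ms, then 1s) and retries until it exists. A hedged check that it was eventually published:
	
	  $ kubectl -n kube-system get configmap kube-root-ca.crt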
	
	
	==> storage-provisioner [d7b0f32df7bb] <==
	I1209 11:38:53.985501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 11:38:53.991327       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 11:38:53.991351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 11:38:53.996791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 11:38:53.996904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-765000_d999a610-28ee-4dc1-9f05-674ccba15e0b!
	I1209 11:38:53.997336       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4caf121-fdf0-4897-a8c0-1ba4dd5b16dc", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-765000_d999a610-28ee-4dc1-9f05-674ccba15e0b became leader
	I1209 11:38:54.098468       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-765000_d999a610-28ee-4dc1-9f05-674ccba15e0b!
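	
	Note: the provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath leader-election lease, which (per the event above) it records on an Endpoints object; the current holder can be inspected with:
	
	  $ kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml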
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-765000 -n running-upgrade-765000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-765000 -n running-upgrade-765000: exit status 2 (15.540222542s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-765000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-765000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-765000
--- FAIL: TestRunningBinaryUpgrade (627.73s)

TestKubernetesUpgrade (20.97s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-504000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-504000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (12.357540875s)

-- stdout --
	* [kubernetes-upgrade-504000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-504000" primary control-plane node in "kubernetes-upgrade-504000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-504000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
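
Note: as in the other qemu2 failures in this report, the error is host-side: nothing is listening on /var/run/socket_vmnet, the socket the qemu2 driver dials through /opt/socket_vmnet/bin/socket_vmnet_client (both paths appear in the cluster config below), so VM creation fails before Kubernetes is ever involved. A hedged host-side check, plus the invocation the socket_vmnet README documents for starting the daemon (the gateway address is the upstream example, not taken from this log):

  $ ls -l /var/run/socket_vmnet
  $ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet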
** stderr ** 
	I1209 03:32:24.476972    9542 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:32:24.477537    9542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:32:24.477549    9542 out.go:358] Setting ErrFile to fd 2...
	I1209 03:32:24.477556    9542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:32:24.478165    9542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:32:24.479332    9542 out.go:352] Setting JSON to false
	I1209 03:32:24.501937    9542 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5515,"bootTime":1733738429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:32:24.502041    9542 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:32:24.519509    9542 out.go:177] * [kubernetes-upgrade-504000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:32:24.523498    9542 notify.go:220] Checking for updates...
	I1209 03:32:24.527456    9542 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:32:24.533433    9542 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:32:24.539484    9542 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:32:24.545454    9542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:32:24.548466    9542 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:32:24.551461    9542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:32:24.554804    9542 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:32:24.554862    9542 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:32:24.570374    9542 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:32:24.577419    9542 start.go:297] selected driver: qemu2
	I1209 03:32:24.577425    9542 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:32:24.577431    9542 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:32:24.580033    9542 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:32:24.585425    9542 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:32:24.589559    9542 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 03:32:24.589575    9542 cni.go:84] Creating CNI manager for ""
	I1209 03:32:24.589607    9542 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 03:32:24.589638    9542 start.go:340] cluster config:
	{Name:kubernetes-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:32:24.594318    9542 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:32:24.607489    9542 out.go:177] * Starting "kubernetes-upgrade-504000" primary control-plane node in "kubernetes-upgrade-504000" cluster
	I1209 03:32:24.615452    9542 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:32:24.615475    9542 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:32:24.615489    9542 cache.go:56] Caching tarball of preloaded images
	I1209 03:32:24.615577    9542 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:32:24.615583    9542 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 03:32:24.615661    9542 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kubernetes-upgrade-504000/config.json ...
	I1209 03:32:24.615673    9542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kubernetes-upgrade-504000/config.json: {Name:mk8f6a14b9c93294951624624d736805881dde8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:32:24.617633    9542 start.go:360] acquireMachinesLock for kubernetes-upgrade-504000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:32:26.910571    9542 start.go:364] duration metric: took 2.292946583s to acquireMachinesLock for "kubernetes-upgrade-504000"
	I1209 03:32:26.910682    9542 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:32:26.910961    9542 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:32:26.925541    9542 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:32:26.974365    9542 start.go:159] libmachine.API.Create for "kubernetes-upgrade-504000" (driver="qemu2")
	I1209 03:32:26.974413    9542 client.go:168] LocalClient.Create starting
	I1209 03:32:26.974578    9542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:32:26.974654    9542 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:26.974679    9542 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:26.974743    9542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:32:26.974800    9542 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:26.974817    9542 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:26.975601    9542 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:32:27.228560    9542 main.go:141] libmachine: Creating SSH key...
	I1209 03:32:27.361790    9542 main.go:141] libmachine: Creating Disk image...
	I1209 03:32:27.361797    9542 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:32:27.362026    9542 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:27.372343    9542 main.go:141] libmachine: STDOUT: 
	I1209 03:32:27.372365    9542 main.go:141] libmachine: STDERR: 
	I1209 03:32:27.372428    9542 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2 +20000M
	I1209 03:32:27.380854    9542 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:32:27.380869    9542 main.go:141] libmachine: STDERR: 
	I1209 03:32:27.380886    9542 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:27.380893    9542 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:32:27.380907    9542 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:32:27.380937    9542 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:82:01:f4:f5:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:27.382885    9542 main.go:141] libmachine: STDOUT: 
	I1209 03:32:27.382900    9542 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:32:27.382931    9542 client.go:171] duration metric: took 408.517917ms to LocalClient.Create
	I1209 03:32:29.385064    9542 start.go:128] duration metric: took 2.474119458s to createHost
	I1209 03:32:29.385141    9542 start.go:83] releasing machines lock for "kubernetes-upgrade-504000", held for 2.474580833s
	W1209 03:32:29.385207    9542 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:29.398619    9542 out.go:177] * Deleting "kubernetes-upgrade-504000" in qemu2 ...
	W1209 03:32:29.441486    9542 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:29.441516    9542 start.go:729] Will try again in 5 seconds ...
	I1209 03:32:34.443486    9542 start.go:360] acquireMachinesLock for kubernetes-upgrade-504000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:32:34.443553    9542 start.go:364] duration metric: took 50.833µs to acquireMachinesLock for "kubernetes-upgrade-504000"
	I1209 03:32:34.443569    9542 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:32:34.443629    9542 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:32:34.448313    9542 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:32:34.464516    9542 start.go:159] libmachine.API.Create for "kubernetes-upgrade-504000" (driver="qemu2")
	I1209 03:32:34.464540    9542 client.go:168] LocalClient.Create starting
	I1209 03:32:34.464591    9542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:32:34.464622    9542 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:34.464629    9542 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:34.464662    9542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:32:34.464681    9542 main.go:141] libmachine: Decoding PEM data...
	I1209 03:32:34.464686    9542 main.go:141] libmachine: Parsing certificate...
	I1209 03:32:34.465027    9542 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:32:34.662029    9542 main.go:141] libmachine: Creating SSH key...
	I1209 03:32:34.733905    9542 main.go:141] libmachine: Creating Disk image...
	I1209 03:32:34.733912    9542 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:32:34.734121    9542 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:34.744808    9542 main.go:141] libmachine: STDOUT: 
	I1209 03:32:34.744834    9542 main.go:141] libmachine: STDERR: 
	I1209 03:32:34.744911    9542 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2 +20000M
	I1209 03:32:34.753992    9542 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:32:34.754009    9542 main.go:141] libmachine: STDERR: 
	I1209 03:32:34.754023    9542 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:34.754030    9542 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:32:34.754042    9542 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:32:34.754068    9542 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bb:0f:36:dd:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:34.756016    9542 main.go:141] libmachine: STDOUT: 
	I1209 03:32:34.756031    9542 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:32:34.756045    9542 client.go:171] duration metric: took 291.507ms to LocalClient.Create
	I1209 03:32:36.758203    9542 start.go:128] duration metric: took 2.314590625s to createHost
	I1209 03:32:36.758295    9542 start.go:83] releasing machines lock for "kubernetes-upgrade-504000", held for 2.314773958s
	W1209 03:32:36.758678    9542 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:36.768301    9542 out.go:201] 
	W1209 03:32:36.772455    9542 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:32:36.772506    9542 out.go:270] * 
	* 
	W1209 03:32:36.774913    9542 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:32:36.784432    9542 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-504000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-504000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-504000: (3.157341792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-504000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-504000 status --format={{.Host}}: exit status 7 (68.661875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-504000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-504000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.19984325s)

-- stdout --
	* [kubernetes-upgrade-504000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-504000" primary control-plane node in "kubernetes-upgrade-504000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:32:40.060773    9602 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:32:40.060941    9602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:32:40.060944    9602 out.go:358] Setting ErrFile to fd 2...
	I1209 03:32:40.060947    9602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:32:40.061080    9602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:32:40.062190    9602 out.go:352] Setting JSON to false
	I1209 03:32:40.079716    9602 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5531,"bootTime":1733738429,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:32:40.079786    9602 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:32:40.085143    9602 out.go:177] * [kubernetes-upgrade-504000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:32:40.093112    9602 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:32:40.093139    9602 notify.go:220] Checking for updates...
	I1209 03:32:40.100054    9602 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:32:40.103065    9602 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:32:40.106118    9602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:32:40.107510    9602 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:32:40.110073    9602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:32:40.113343    9602 config.go:182] Loaded profile config "kubernetes-upgrade-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1209 03:32:40.113607    9602 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:32:40.120108    9602 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:32:40.128068    9602 start.go:297] selected driver: qemu2
	I1209 03:32:40.128074    9602 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:32:40.128119    9602 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:32:40.130550    9602 cni.go:84] Creating CNI manager for ""
	I1209 03:32:40.130576    9602 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:32:40.130600    9602 start.go:340] cluster config:
	{Name:kubernetes-upgrade-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:32:40.134875    9602 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:32:40.143067    9602 out.go:177] * Starting "kubernetes-upgrade-504000" primary control-plane node in "kubernetes-upgrade-504000" cluster
	I1209 03:32:40.146140    9602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:32:40.146154    9602 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:32:40.146163    9602 cache.go:56] Caching tarball of preloaded images
	I1209 03:32:40.146241    9602 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:32:40.146247    9602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:32:40.146299    9602 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kubernetes-upgrade-504000/config.json ...
	I1209 03:32:40.146836    9602 start.go:360] acquireMachinesLock for kubernetes-upgrade-504000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:32:40.146887    9602 start.go:364] duration metric: took 43.459µs to acquireMachinesLock for "kubernetes-upgrade-504000"
	I1209 03:32:40.146896    9602 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:32:40.146901    9602 fix.go:54] fixHost starting: 
	I1209 03:32:40.147027    9602 fix.go:112] recreateIfNeeded on kubernetes-upgrade-504000: state=Stopped err=<nil>
	W1209 03:32:40.147037    9602 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:32:40.151057    9602 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-504000" ...
	I1209 03:32:40.157052    9602 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:32:40.157096    9602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bb:0f:36:dd:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:40.159391    9602 main.go:141] libmachine: STDOUT: 
	I1209 03:32:40.159412    9602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:32:40.159443    9602 fix.go:56] duration metric: took 12.541792ms for fixHost
	I1209 03:32:40.159449    9602 start.go:83] releasing machines lock for "kubernetes-upgrade-504000", held for 12.556584ms
	W1209 03:32:40.159454    9602 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:32:40.159492    9602 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:40.159497    9602 start.go:729] Will try again in 5 seconds ...
	I1209 03:32:45.160619    9602 start.go:360] acquireMachinesLock for kubernetes-upgrade-504000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:32:45.161126    9602 start.go:364] duration metric: took 419.25µs to acquireMachinesLock for "kubernetes-upgrade-504000"
	I1209 03:32:45.161266    9602 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:32:45.161287    9602 fix.go:54] fixHost starting: 
	I1209 03:32:45.161983    9602 fix.go:112] recreateIfNeeded on kubernetes-upgrade-504000: state=Stopped err=<nil>
	W1209 03:32:45.162009    9602 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:32:45.172148    9602 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-504000" ...
	I1209 03:32:45.177992    9602 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:32:45.178187    9602 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bb:0f:36:dd:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubernetes-upgrade-504000/disk.qcow2
	I1209 03:32:45.189235    9602 main.go:141] libmachine: STDOUT: 
	I1209 03:32:45.189296    9602 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:32:45.189389    9602 fix.go:56] duration metric: took 28.102ms for fixHost
	I1209 03:32:45.189412    9602 start.go:83] releasing machines lock for "kubernetes-upgrade-504000", held for 28.26175ms
	W1209 03:32:45.189662    9602 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:32:45.197902    9602 out.go:201] 
	W1209 03:32:45.202159    9602 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:32:45.202185    9602 out.go:270] * 
	* 
	W1209 03:32:45.204328    9602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:32:45.214054    9602 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-504000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-504000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-504000 version --output=json: exit status 1 (62.337084ms)

** stderr ** 
	error: context "kubernetes-upgrade-504000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-09 03:32:45.29081 -0800 PST m=+670.421376418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-504000 -n kubernetes-upgrade-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-504000 -n kubernetes-upgrade-504000: exit status 7 (38.146334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-504000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-504000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-504000
--- FAIL: TestKubernetesUpgrade (20.97s)

TestStoppedBinaryUpgrade/Upgrade (593.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.254161877 start -p stopped-upgrade-416000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.254161877 start -p stopped-upgrade-416000 --memory=2200 --vm-driver=qemu2 : (54.988415459s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.254161877 -p stopped-upgrade-416000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.254161877 -p stopped-upgrade-416000 stop: (12.109655875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-416000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-416000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m46.07609325s)

-- stdout --
	* [stopped-upgrade-416000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-416000" primary control-plane node in "stopped-upgrade-416000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-416000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1209 03:33:42.816831    9647 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:33:42.818050    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:33:42.818070    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:33:42.818083    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:33:42.818356    9647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:33:42.820073    9647 out.go:352] Setting JSON to false
	I1209 03:33:42.840083    9647 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5593,"bootTime":1733738429,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:33:42.840596    9647 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:33:42.844374    9647 out.go:177] * [stopped-upgrade-416000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:33:42.852807    9647 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:33:42.852953    9647 notify.go:220] Checking for updates...
	I1209 03:33:42.861359    9647 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:33:42.865325    9647 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:33:42.868343    9647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:33:42.871399    9647 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:33:42.874346    9647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:33:42.877645    9647 config.go:182] Loaded profile config "stopped-upgrade-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:33:42.881530    9647 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 03:33:42.884834    9647 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:33:42.889337    9647 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:33:42.895278    9647 start.go:297] selected driver: qemu2
	I1209 03:33:42.895293    9647 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:33:42.895333    9647 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:33:42.898417    9647 cni.go:84] Creating CNI manager for ""
	I1209 03:33:42.898607    9647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:33:42.898767    9647 start.go:340] cluster config:
	{Name:stopped-upgrade-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:33:42.898837    9647 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:33:42.906299    9647 out.go:177] * Starting "stopped-upgrade-416000" primary control-plane node in "stopped-upgrade-416000" cluster
	I1209 03:33:42.910338    9647 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:33:42.910355    9647 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1209 03:33:42.910397    9647 cache.go:56] Caching tarball of preloaded images
	I1209 03:33:42.910470    9647 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:33:42.910479    9647 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1209 03:33:42.910553    9647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/config.json ...
	I1209 03:33:42.911125    9647 start.go:360] acquireMachinesLock for stopped-upgrade-416000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:33:42.911154    9647 start.go:364] duration metric: took 23.375µs to acquireMachinesLock for "stopped-upgrade-416000"
	I1209 03:33:42.911162    9647 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:33:42.911175    9647 fix.go:54] fixHost starting: 
	I1209 03:33:42.911288    9647 fix.go:112] recreateIfNeeded on stopped-upgrade-416000: state=Stopped err=<nil>
	W1209 03:33:42.911296    9647 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:33:42.920312    9647 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-416000" ...
	I1209 03:33:42.924348    9647 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:33:42.924609    9647 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/qemu.pid -nic user,model=virtio,hostfwd=tcp::60489-:22,hostfwd=tcp::60490-:2376,hostname=stopped-upgrade-416000 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/disk.qcow2
	I1209 03:33:42.970891    9647 main.go:141] libmachine: STDOUT: 
	I1209 03:33:42.970911    9647 main.go:141] libmachine: STDERR: 
	I1209 03:33:42.970917    9647 main.go:141] libmachine: Waiting for VM to start (ssh -p 60489 docker@127.0.0.1)...
	I1209 03:34:02.794909    9647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/config.json ...
	I1209 03:34:02.795168    9647 machine.go:93] provisionDockerMachine start ...
	I1209 03:34:02.795240    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:02.795403    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:02.795407    9647 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 03:34:02.866807    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 03:34:02.866854    9647 buildroot.go:166] provisioning hostname "stopped-upgrade-416000"
	I1209 03:34:02.866939    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:02.867050    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:02.867056    9647 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-416000 && echo "stopped-upgrade-416000" | sudo tee /etc/hostname
	I1209 03:34:02.938278    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-416000
	
	I1209 03:34:02.938359    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:02.938482    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:02.938489    9647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-416000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-416000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-416000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 03:34:03.009417    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 03:34:03.009430    9647 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20068-6536/.minikube CaCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20068-6536/.minikube}
	I1209 03:34:03.009439    9647 buildroot.go:174] setting up certificates
	I1209 03:34:03.009444    9647 provision.go:84] configureAuth start
	I1209 03:34:03.009455    9647 provision.go:143] copyHostCerts
	I1209 03:34:03.009545    9647 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem, removing ...
	I1209 03:34:03.009574    9647 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem
	I1209 03:34:03.009676    9647 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.pem (1078 bytes)
	I1209 03:34:03.009842    9647 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem, removing ...
	I1209 03:34:03.009846    9647 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem
	I1209 03:34:03.009893    9647 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/cert.pem (1123 bytes)
	I1209 03:34:03.010024    9647 exec_runner.go:144] found /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem, removing ...
	I1209 03:34:03.010029    9647 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem
	I1209 03:34:03.010071    9647 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20068-6536/.minikube/key.pem (1675 bytes)
	I1209 03:34:03.010172    9647 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-416000 san=[127.0.0.1 localhost minikube stopped-upgrade-416000]
	I1209 03:34:03.208189    9647 provision.go:177] copyRemoteCerts
	I1209 03:34:03.208272    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 03:34:03.208281    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:34:03.244794    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 03:34:03.252797    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 03:34:03.261330    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 03:34:03.269384    9647 provision.go:87] duration metric: took 259.930667ms to configureAuth
	I1209 03:34:03.269399    9647 buildroot.go:189] setting minikube options for container-runtime
	I1209 03:34:03.269550    9647 config.go:182] Loaded profile config "stopped-upgrade-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:34:03.269607    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.269704    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.269710    9647 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 03:34:03.340084    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 03:34:03.340096    9647 buildroot.go:70] root file system type: tmpfs
	I1209 03:34:03.340165    9647 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 03:34:03.340248    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.340372    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.340412    9647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 03:34:03.412405    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1209 03:34:03.412478    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.412591    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.412601    9647 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 03:34:03.799847    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1209 03:34:03.799864    9647 machine.go:96] duration metric: took 1.004709292s to provisionDockerMachine
	I1209 03:34:03.799871    9647 start.go:293] postStartSetup for "stopped-upgrade-416000" (driver="qemu2")
	I1209 03:34:03.799877    9647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 03:34:03.799951    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 03:34:03.799963    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:34:03.838458    9647 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 03:34:03.840072    9647 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 03:34:03.840083    9647 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/addons for local assets ...
	I1209 03:34:03.840166    9647 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20068-6536/.minikube/files for local assets ...
	I1209 03:34:03.840265    9647 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem -> 78202.pem in /etc/ssl/certs
	I1209 03:34:03.840377    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 03:34:03.845422    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:03.856053    9647 start.go:296] duration metric: took 56.175583ms for postStartSetup
	I1209 03:34:03.856075    9647 fix.go:56] duration metric: took 20.945300875s for fixHost
	I1209 03:34:03.856134    9647 main.go:141] libmachine: Using SSH client type: native
	I1209 03:34:03.856249    9647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10116efc0] 0x101171800 <nil>  [] 0s} localhost 60489 <nil> <nil>}
	I1209 03:34:03.856257    9647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 03:34:03.922981    9647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733744044.217951046
	
	I1209 03:34:03.922994    9647 fix.go:216] guest clock: 1733744044.217951046
	I1209 03:34:03.923000    9647 fix.go:229] Guest: 2024-12-09 03:34:04.217951046 -0800 PST Remote: 2024-12-09 03:34:03.856076 -0800 PST m=+21.147266376 (delta=361.875046ms)
	I1209 03:34:03.923012    9647 fix.go:200] guest clock delta is within tolerance: 361.875046ms
	I1209 03:34:03.923014    9647 start.go:83] releasing machines lock for "stopped-upgrade-416000", held for 21.012249875s
	I1209 03:34:03.923093    9647 ssh_runner.go:195] Run: cat /version.json
	I1209 03:34:03.923103    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:34:03.923176    9647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 03:34:03.924000    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	W1209 03:34:04.003528    9647 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 03:34:04.003602    9647 ssh_runner.go:195] Run: systemctl --version
	I1209 03:34:04.006059    9647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 03:34:04.008086    9647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 03:34:04.008137    9647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 03:34:04.011460    9647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 03:34:04.016358    9647 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
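
The two find/sed invocations above rewrite any bridge or podman CNI config so its subnet becomes the pod CIDR 10.244.0.0/16. The same edit expressed in Go, as a sketch (the regex is simplified from the sed expressions in the log, which also handle gateways and drop IPv6 entries):

    package main

    import (
        "fmt"
        "regexp"
    )

    // subnetRe matches a "subnet" key in a CNI conflist; simplified from the
    // sed expression shown in the log.
    var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

    // rewriteSubnet pins every subnet in the config to the pod CIDR.
    func rewriteSubnet(conf []byte, podCIDR string) []byte {
        return subnetRe.ReplaceAll(conf, []byte(fmt.Sprintf(`"subnet": %q`, podCIDR)))
    }

    func main() {
        in := []byte(`{"ipam": {"ranges": [[{"subnet": "10.88.0.0/16"}]]}}`)
        fmt.Println(string(rewriteSubnet(in, "10.244.0.0/16")))
    }
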
	I1209 03:34:04.016368    9647 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.016475    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.023456    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 03:34:04.026421    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 03:34:04.029333    9647 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 03:34:04.029367    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 03:34:04.032700    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.036352    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 03:34:04.039861    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 03:34:04.043066    9647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 03:34:04.046011    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 03:34:04.048996    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 03:34:04.052290    9647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 03:34:04.055672    9647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 03:34:04.058696    9647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 03:34:04.061182    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:04.140573    9647 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 03:34:04.152245    9647 start.go:495] detecting cgroup driver to use...
	I1209 03:34:04.152352    9647 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 03:34:04.161096    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.165870    9647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 03:34:04.172000    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 03:34:04.177166    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 03:34:04.182447    9647 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 03:34:04.219053    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 03:34:04.223720    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 03:34:04.229184    9647 ssh_runner.go:195] Run: which cri-dockerd
	I1209 03:34:04.230597    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 03:34:04.233197    9647 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 03:34:04.238342    9647 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 03:34:04.315959    9647 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 03:34:04.396893    9647 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 03:34:04.396951    9647 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 03:34:04.402276    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:04.482870    9647 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:05.608912    9647 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126048334s)
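
docker.go:574 above pushes a small daemon.json from memory to pin Docker to the cgroupfs driver, then reloads and restarts the daemon. The exact 130-byte payload is not shown in the log; this sketch produces an equivalent file using Docker's documented daemon.json keys, with the non-cgroup fields being pure assumptions:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // exec-opts is Docker's documented daemon.json key for the cgroup
        // driver; the other two fields are illustrative guesses.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "storage-driver": "overlay2",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out)) // candidate contents for /etc/docker/daemon.json
    }
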
	I1209 03:34:05.608988    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 03:34:05.614415    9647 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1209 03:34:05.621066    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:05.626241    9647 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 03:34:05.705338    9647 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 03:34:05.796042    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:05.882684    9647 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 03:34:05.888241    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 03:34:05.892930    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:05.974355    9647 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 03:34:06.013161    9647 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 03:34:06.013255    9647 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 03:34:06.015780    9647 start.go:563] Will wait 60s for crictl version
	I1209 03:34:06.015845    9647 ssh_runner.go:195] Run: which crictl
	I1209 03:34:06.017314    9647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 03:34:06.032479    9647 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 03:34:06.032560    9647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:06.049478    9647 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 03:34:06.066839    9647 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 03:34:06.066926    9647 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 03:34:06.068118    9647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
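
The grep/echo/cp pipeline above is an idempotent hosts-file update: drop any stale line for the name, append the fresh mapping, then copy the temp file over /etc/hosts. The same logic in Go, as a sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any existing line ending in "\t<name>" and appends
    // a fresh "ip\tname" entry, mirroring the grep -v / echo pipeline.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n10.0.2.99\thost.minikube.internal\n"
        fmt.Print(upsertHost(in, "10.0.2.2", "host.minikube.internal"))
    }
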
	I1209 03:34:06.072392    9647 kubeadm.go:883] updating cluster {Name:stopped-upgrade-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 03:34:06.072446    9647 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 03:34:06.072500    9647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:06.083597    9647 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:06.083606    9647 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:06.083668    9647 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:06.086895    9647 ssh_runner.go:195] Run: which lz4
	I1209 03:34:06.088116    9647 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 03:34:06.089462    9647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 03:34:06.089476    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1209 03:34:07.001379    9647 docker.go:653] duration metric: took 913.325958ms to copy over tarball
	I1209 03:34:07.001452    9647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 03:34:08.195759    9647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.194315084s)
	I1209 03:34:08.195772    9647 ssh_runner.go:146] rm: /preloaded.tar.lz4
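
The lines above show the preload path: stat /preloaded.tar.lz4, scp the ~360MB tarball when it is absent, unpack it into /var, then delete it. A sketch of the extract command construction, with the flags copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the log: preserve xattrs (security.capability)
        // so file capabilities survive, decompress via lz4, unpack under /var.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        fmt.Println(cmd.String()) // print rather than run, for the sketch
    }
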
	I1209 03:34:08.211396    9647 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 03:34:08.214390    9647 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 03:34:08.219128    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:08.303514    9647 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 03:34:09.928134    9647 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.624636042s)
	I1209 03:34:09.928492    9647 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 03:34:09.943672    9647 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 03:34:09.943680    9647 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 03:34:09.943687    9647 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 03:34:09.951896    9647 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:09.954028    9647 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:09.955947    9647 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:09.956002    9647 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:09.958044    9647 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:09.958088    9647 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:09.959654    9647 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:09.959856    9647 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:09.960404    9647 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 03:34:09.961609    9647 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:09.962288    9647 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:09.962936    9647 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:09.962953    9647 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 03:34:09.963439    9647 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:09.964763    9647 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:09.964772    9647 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.430931    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:10.443427    9647 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 03:34:10.443825    9647 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:10.443890    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 03:34:10.456428    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 03:34:10.474630    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:10.486984    9647 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 03:34:10.487014    9647 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:10.487111    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 03:34:10.499706    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 03:34:10.504167    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:10.515763    9647 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 03:34:10.515786    9647 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:10.515855    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 03:34:10.528292    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 03:34:10.594482    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 03:34:10.606563    9647 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 03:34:10.606598    9647 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 03:34:10.606666    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1209 03:34:10.618923    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 03:34:10.619053    9647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 03:34:10.620801    9647 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 03:34:10.620814    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1209 03:34:10.629397    9647 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 03:34:10.629418    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1209 03:34:10.660043    9647 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1209 03:34:10.712597    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:10.722965    9647 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 03:34:10.722988    9647 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:10.723054    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 03:34:10.732729    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 03:34:10.776605    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:10.787165    9647 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 03:34:10.787196    9647 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:10.787264    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 03:34:10.796981    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1209 03:34:10.840311    9647 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:10.840638    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.850665    9647 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 03:34:10.850689    9647 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.850754    9647 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 03:34:10.860778    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 03:34:10.860926    9647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:10.862606    9647 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 03:34:10.862626    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1209 03:34:10.901152    9647 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 03:34:10.901169    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 03:34:10.936899    9647 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1209 03:34:11.153373    9647 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 03:34:11.154522    9647 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:11.175521    9647 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 03:34:11.175558    9647 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:11.175658    9647 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:34:11.195514    9647 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 03:34:11.195677    9647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 03:34:11.197460    9647 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 03:34:11.197471    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 03:34:11.230898    9647 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 03:34:11.230912    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1209 03:34:11.464532    9647 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 03:34:11.464572    9647 cache_images.go:92] duration metric: took 1.520906833s to LoadCachedImages
	W1209 03:34:11.464608    9647 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
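
cache_images.go above runs one loop per required image: docker image inspect to compare the runtime's image ID against the expected hash, docker rmi when it differs, then transfer and `docker load` of the cached tarball. Here pause, coredns, and storage-provisioner were transferred, while etcd and the kube-* images failed because their cached tarballs were missing on the host. A condensed sketch of that loop, with `run` as a hypothetical stand-in for minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run is a hypothetical stand-in for ssh_runner: it executes the command
    // and returns its combined, trimmed output.
    func run(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    // ensureImage loads a cached tarball when the runtime's copy of the image
    // is missing or has a different ID, mirroring the inspect/rmi/load
    // sequence in the log.
    func ensureImage(image, wantID, tarball string) error {
        got, _ := run("docker", "image", "inspect", "--format", "{{.Id}}", image)
        if got == wantID {
            return nil // already correct; nothing to transfer
        }
        run("docker", "rmi", image) // drop the stale copy, ignoring errors
        fmt.Printf("loading %s from %s\n", image, tarball)
        _, err := run("/bin/bash", "-c", "sudo cat "+tarball+" | docker load")
        return err
    }

    func main() {
        _ = ensureImage("registry.k8s.io/pause:3.7",
            "sha256:e5a475a03805", // truncated for the sketch
            "/var/lib/minikube/images/pause_3.7")
    }
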
	I1209 03:34:11.464616    9647 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 03:34:11.464673    9647 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-416000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
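
The kubelet drop-in above is rendered from the node config printed underneath: kubeadm.go:946 fills the ExecStart flags (runtime endpoint, hostname override, node IP) from the cluster struct. A text/template sketch of that rendering, reduced to the flags visible in the log (the real template carries many more flags and conditionals):

    package main

    import (
        "os"
        "text/template"
    )

    // unitTmpl reproduces only the ExecStart lines from the log.
    const unitTmpl = "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet" +
        " --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Node}} --node-ip={{.IP}}\n"

    func main() {
        t := template.Must(template.New("unit").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, map[string]string{
            "Version":   "v1.24.1",
            "CRISocket": "unix:///var/run/cri-dockerd.sock",
            "Node":      "stopped-upgrade-416000",
            "IP":        "10.0.2.15",
        })
    }
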
	I1209 03:34:11.464746    9647 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 03:34:11.478338    9647 cni.go:84] Creating CNI manager for ""
	I1209 03:34:11.478350    9647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:34:11.478606    9647 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 03:34:11.478622    9647 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-416000 NodeName:stopped-upgrade-416000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 03:34:11.478697    9647 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-416000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 03:34:11.478765    9647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 03:34:11.481592    9647 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 03:34:11.481630    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 03:34:11.484332    9647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 03:34:11.489629    9647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 03:34:11.494739    9647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 03:34:11.500287    9647 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 03:34:11.501640    9647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 03:34:11.505154    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:34:11.583568    9647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:34:11.591916    9647 certs.go:68] Setting up /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000 for IP: 10.0.2.15
	I1209 03:34:11.591927    9647 certs.go:194] generating shared ca certs ...
	I1209 03:34:11.591937    9647 certs.go:226] acquiring lock for ca certs: {Name:mkab7ef03786804c126b265c91619df81c881ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.592354    9647 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key
	I1209 03:34:11.592581    9647 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key
	I1209 03:34:11.592600    9647 certs.go:256] generating profile certs ...
	I1209 03:34:11.593262    9647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.key
	I1209 03:34:11.593280    9647 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50
	I1209 03:34:11.593290    9647 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 03:34:11.730240    9647 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50 ...
	I1209 03:34:11.730257    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50: {Name:mk9f53df097e6cd17fb158ce3b910804aa4c0973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.730609    9647 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50 ...
	I1209 03:34:11.730614    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50: {Name:mk2653b45057ab70adba95a9012e2d47f2c51c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.730792    9647 certs.go:381] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt.ff526b50 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt
	I1209 03:34:11.730939    9647 certs.go:385] copying /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key.ff526b50 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key
	I1209 03:34:11.731301    9647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/proxy-client.key
	I1209 03:34:11.731513    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem (1338 bytes)
	W1209 03:34:11.731747    9647 certs.go:480] ignoring /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820_empty.pem, impossibly tiny 0 bytes
	I1209 03:34:11.731759    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 03:34:11.731786    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem (1078 bytes)
	I1209 03:34:11.731807    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem (1123 bytes)
	I1209 03:34:11.731828    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/key.pem (1675 bytes)
	I1209 03:34:11.731874    9647 certs.go:484] found cert: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem (1708 bytes)
	I1209 03:34:11.734370    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 03:34:11.741287    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 03:34:11.748013    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 03:34:11.755561    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 03:34:11.762185    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 03:34:11.768865    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 03:34:11.775886    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 03:34:11.782985    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 03:34:11.789595    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/7820.pem --> /usr/share/ca-certificates/7820.pem (1338 bytes)
	I1209 03:34:11.796075    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/ssl/certs/78202.pem --> /usr/share/ca-certificates/78202.pem (1708 bytes)
	I1209 03:34:11.803072    9647 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 03:34:11.809548    9647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 03:34:11.814634    9647 ssh_runner.go:195] Run: openssl version
	I1209 03:34:11.816436    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 03:34:11.819370    9647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:11.820754    9647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:11.820775    9647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 03:34:11.822590    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 03:34:11.825392    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7820.pem && ln -fs /usr/share/ca-certificates/7820.pem /etc/ssl/certs/7820.pem"
	I1209 03:34:11.828718    9647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7820.pem
	I1209 03:34:11.830172    9647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 11:22 /usr/share/ca-certificates/7820.pem
	I1209 03:34:11.830196    9647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7820.pem
	I1209 03:34:11.831970    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7820.pem /etc/ssl/certs/51391683.0"
	I1209 03:34:11.835055    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78202.pem && ln -fs /usr/share/ca-certificates/78202.pem /etc/ssl/certs/78202.pem"
	I1209 03:34:11.838111    9647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78202.pem
	I1209 03:34:11.839417    9647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 11:22 /usr/share/ca-certificates/78202.pem
	I1209 03:34:11.839444    9647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78202.pem
	I1209 03:34:11.841261    9647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78202.pem /etc/ssl/certs/3ec20f2e.0"
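
certs.go:528 above computes the OpenSSL subject hash for each CA and symlinks it into /etc/ssl/certs as <hash>.0, which is how OpenSSL's lookup-by-directory finds trust roots (b5213941.0 for minikubeCA here). A sketch of that step, shelling out to openssl for the hash rather than reimplementing it:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash asks openssl for the subject hash that names the
    // /etc/ssl/certs/<hash>.0 symlink.
    func subjectHash(pem string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Println("openssl not available:", err)
            return
        }
        // The real step then runs: ln -fs <pem> /etc/ssl/certs/<hash>.0
        fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
    }
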
	I1209 03:34:11.844542    9647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 03:34:11.846223    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 03:34:11.848182    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 03:34:11.850070    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 03:34:11.852105    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 03:34:11.853984    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 03:34:11.855673    9647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
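
The `-checkend 86400` calls above ask whether each control-plane cert expires within 24 hours, so expiring certs can be regenerated before kubeadm runs. The same check in pure Go, using crypto/x509 instead of shelling out:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // the Go equivalent of `openssl x509 -checkend` failing.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, err)
    }
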
	I1209 03:34:11.859125    9647 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:60521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 03:34:11.859207    9647 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:11.869081    9647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 03:34:11.872486    9647 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 03:34:11.872659    9647 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 03:34:11.872689    9647 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 03:34:11.875582    9647 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 03:34:11.875801    9647 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-416000" does not appear in /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:34:11.875824    9647 kubeconfig.go:62] /Users/jenkins/minikube-integration/20068-6536/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-416000" cluster setting kubeconfig missing "stopped-upgrade-416000" context setting]
	I1209 03:34:11.875987    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:34:11.877633    9647 kapi.go:59] client config for stopped-upgrade-416000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102bcb740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:34:11.883332    9647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 03:34:11.886043    9647 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-416000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1209 03:34:11.886048    9647 kubeadm.go:1160] stopping kube-system containers ...
	I1209 03:34:11.886093    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 03:34:11.896540    9647 docker.go:483] Stopping containers: [a572daa6beda 8e04376e2372 5302a3675333 5b19c97e6b50 8c74a6bfa12f 30c1dd3114a2 e540ad2ee556 31622873173a]
	I1209 03:34:11.896617    9647 ssh_runner.go:195] Run: docker stop a572daa6beda 8e04376e2372 5302a3675333 5b19c97e6b50 8c74a6bfa12f 30c1dd3114a2 e540ad2ee556 31622873173a
	I1209 03:34:11.906994    9647 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 03:34:11.912431    9647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:34:11.915623    9647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:34:11.915628    9647 kubeadm.go:157] found existing configuration files:
	
	I1209 03:34:11.915655    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf
	I1209 03:34:11.918135    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:34:11.918165    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:34:11.920790    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf
	I1209 03:34:11.923852    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:34:11.923887    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:34:11.926769    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf
	I1209 03:34:11.929146    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:34:11.929180    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:34:11.932140    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf
	I1209 03:34:11.934820    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:34:11.934850    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 03:34:11.937291    9647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:34:11.940465    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:11.962525    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.378476    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.512080    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 03:34:12.542009    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
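
On the restart path above, minikube re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of performing a full init. A sketch of how such a phase sequence can be composed; the PATH prefix and phase names are copied from the log, while the loop structure is an assumption:

    package main

    import "fmt"

    func main() {
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start",
            "control-plane all", "etcd local",
        }
        for _, p := range phases {
            // Each phase becomes one remote bash invocation, as in the log.
            fmt.Printf("sudo env PATH=\"/var/lib/minikube/binaries/v1.24.1:$PATH\" "+
                "kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml\n", p)
        }
    }
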
	I1209 03:34:12.562588    9647 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:34:12.562698    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:13.064754    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:13.564839    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:34:13.572476    9647 api_server.go:72] duration metric: took 1.009903792s to wait for apiserver process to appear ...
	I1209 03:34:13.572487    9647 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:34:13.572710    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:18.575681    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:18.575773    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:23.576520    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:23.576550    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:28.577510    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:28.577530    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:33.578459    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:33.578497    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:38.579795    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:38.579836    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:43.581410    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:43.581429    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:48.583440    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:48.583498    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:53.585719    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:53.585768    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:34:58.586483    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:34:58.586579    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:03.589050    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:03.589105    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:08.591519    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:08.591568    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:13.593858    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
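
api_server.go above polls https://10.0.2.15:8443/healthz roughly every five seconds until the wait deadline; in this failing run every probe times out, which is why the code falls through to log gathering below. A minimal polling sketch: the 5s per-request timeout is inferred from the log timestamps, and skipping TLS verification is a sketch shortcut (the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz probes the apiserver's /healthz until it answers 200 OK
    // or the deadline passes.
    func waitHealthz(url string, deadline time.Time) bool {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gaps between probes
            Transport: &http.Transport{
                // Sketch shortcut only: the real check trusts the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return true
                }
            }
            time.Sleep(500 * time.Millisecond) // brief pause before the next probe
        }
        return false
    }

    func main() {
        ok := waitHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(30*time.Second))
        fmt.Println("apiserver healthy:", ok)
    }
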
	I1209 03:35:13.595009    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:13.610807    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:13.610909    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:13.630786    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:13.630872    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:13.641086    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:13.641178    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:13.651192    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:13.651278    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:13.661414    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:13.661496    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:13.671940    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:13.672025    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:13.682354    9647 logs.go:282] 0 containers: []
	W1209 03:35:13.682366    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:13.682437    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:13.692996    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
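Once the probe gives up, the runner enumerates control-plane containers one component at a time with a docker name filter (the docker ps -a --filter=name=k8s_<component> --format={{.ID}} lines above), warning when a filter matches nothing, as with "kindnet" here. A sketch of that step follows, assuming a docker CLI on PATH; the helper name containerIDs is illustrative.

    // Hypothetical sketch of the container-enumeration step (logs.go:282).
    // Assumes a docker CLI on PATH; containerIDs is an illustrative helper,
    // not minikube's actual function.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            if len(ids) == 0 {
                // mirrors the W-level line above for "kindnet"
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
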
	I1209 03:35:13.693014    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:13.693021    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:13.704904    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:13.704917    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:13.716569    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:13.716582    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:13.729349    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:13.729361    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:13.733441    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:13.733449    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:13.748481    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:13.748492    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:13.763283    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:13.763293    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:13.800027    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:13.800037    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:13.906630    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:13.906643    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:13.918488    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:13.918499    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:13.936860    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:13.936872    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:13.963088    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:13.963101    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:13.978149    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:13.978162    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:13.995209    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:13.995223    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:14.006465    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:14.006486    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:14.030727    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:14.030738    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:14.044568    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:14.044585    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
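The "Gathering logs for ..." pass then tails the last 400 lines from each discovered container plus the kubelet and docker journals, dmesg, container status, and a kubectl describe of the nodes; in the report these commands run through ssh_runner inside the guest VM. The sketch below replays a subset of the same commands locally for illustration, reusing the kube-apiserver container ID from the listing above; the helper gather is hypothetical.

    // Hypothetical replay of the "Gathering logs for ..." pass. In the
    // report these commands run via ssh_runner inside the guest; gather is
    // an illustrative helper, and the container ID is the kube-apiserver ID
    // listed above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(label, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", label)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%s: %v\n", label, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("kube-apiserver [2ff88a4a7e59]", "docker logs --tail 400 2ff88a4a7e59")
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
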
	I1209 03:35:16.557762    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:21.558017    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:21.558271    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:21.580397    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:21.580506    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:21.594834    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:21.594918    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:21.606915    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:21.606995    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:21.617634    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:21.617720    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:21.634040    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:21.634121    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:21.644394    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:21.644474    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:21.654338    9647 logs.go:282] 0 containers: []
	W1209 03:35:21.654348    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:21.654416    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:21.664775    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:21.664792    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:21.664797    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:21.689610    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:21.689624    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:21.704455    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:21.704465    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:21.719737    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:21.719747    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:21.731851    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:21.731865    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:21.746240    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:21.746250    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:21.760164    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:21.760178    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:21.785278    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:21.785289    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:21.809834    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:21.809844    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:21.822020    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:21.822031    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:21.858602    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:21.858616    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:21.874034    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:21.874044    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:21.885439    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:21.885452    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:21.898115    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:21.898125    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:21.936175    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:21.936184    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:21.940679    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:21.940686    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:21.952649    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:21.952660    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:24.466385    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:29.468777    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:29.468906    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:29.480383    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:29.480464    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:29.492155    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:29.492246    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:29.503309    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:29.503396    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:29.514312    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:29.514398    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:29.525161    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:29.525244    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:29.536183    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:29.536323    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:29.547640    9647 logs.go:282] 0 containers: []
	W1209 03:35:29.547651    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:29.547719    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:29.562802    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:29.562817    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:29.562822    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:29.577676    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:29.577693    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:29.605030    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:29.605041    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:29.645822    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:29.645833    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:29.659748    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:29.659758    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:29.673694    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:29.673707    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:29.691261    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:29.691270    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:29.704039    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:29.704049    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:29.740163    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:29.740175    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:29.753645    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:29.753658    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:29.768589    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:29.768603    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:29.782455    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:29.782467    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:29.797010    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:29.797025    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:29.811845    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:29.811855    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:29.827846    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:29.827859    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:29.832108    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:29.832113    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:29.856645    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:29.856654    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:32.370283    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:37.372438    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:37.372534    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:37.384218    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:37.384302    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:37.396303    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:37.396387    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:37.409172    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:37.409251    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:37.426461    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:37.426541    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:37.437524    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:37.437616    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:37.448951    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:37.449035    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:37.460730    9647 logs.go:282] 0 containers: []
	W1209 03:35:37.460745    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:37.460822    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:37.472579    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:37.472596    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:37.472603    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:37.485888    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:37.485900    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:37.504313    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:37.504325    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:37.516685    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:37.516695    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:37.532409    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:37.532421    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:37.560017    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:37.560032    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:37.576775    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:37.576789    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:37.590907    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:37.590921    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:37.603338    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:37.603349    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:37.630868    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:37.630880    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:37.646761    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:37.646771    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:37.658217    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:37.658231    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:37.697492    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:37.697501    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:37.718016    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:37.718027    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:37.731772    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:37.731783    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:37.749826    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:37.749837    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:37.753935    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:37.753943    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:40.291165    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:45.294776    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:45.294879    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:45.306257    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:45.306343    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:45.317239    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:45.317328    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:45.328866    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:45.328948    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:45.339969    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:45.340051    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:45.351445    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:45.351531    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:45.363103    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:45.363189    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:45.375053    9647 logs.go:282] 0 containers: []
	W1209 03:35:45.375066    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:45.375147    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:45.390579    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:45.390599    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:45.390608    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:45.429562    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:45.429575    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:45.445669    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:45.445680    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:45.472216    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:45.472242    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:45.487072    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:45.487087    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:45.502750    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:45.502762    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:45.521081    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:45.521096    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:45.533218    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:45.533231    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:45.550635    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:45.550645    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:45.591916    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:45.591928    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:45.596578    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:45.596589    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:45.629489    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:45.629501    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:45.641189    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:45.641201    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:45.656114    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:45.656123    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:45.675720    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:45.675732    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:45.689705    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:45.689716    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:45.702174    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:45.702186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:48.219171    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:35:53.219669    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:35:53.219760    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:35:53.230653    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:35:53.230749    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:35:53.241003    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:35:53.241088    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:35:53.252593    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:35:53.252677    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:35:53.263792    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:35:53.263883    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:35:53.275135    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:35:53.275216    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:35:53.286909    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:35:53.286988    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:35:53.298164    9647 logs.go:282] 0 containers: []
	W1209 03:35:53.298175    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:35:53.298245    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:35:53.309846    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:35:53.309863    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:35:53.309869    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:35:53.314254    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:35:53.314266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:35:53.328760    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:35:53.328774    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:35:53.344256    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:35:53.344271    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:35:53.356409    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:35:53.356418    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:35:53.396009    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:35:53.396033    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:35:53.408591    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:35:53.408604    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:35:53.424826    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:35:53.424836    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:35:53.451026    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:35:53.451036    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:35:53.469838    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:35:53.469851    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:35:53.482021    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:35:53.482035    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:35:53.500842    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:35:53.500857    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:35:53.513339    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:35:53.513352    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:35:53.525916    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:35:53.525926    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:35:53.564509    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:35:53.564525    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:35:53.590118    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:35:53.590130    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:35:53.601169    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:35:53.601181    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:35:56.130062    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:01.130447    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:01.130562    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:01.141862    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:01.141955    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:01.153233    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:01.153320    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:01.165460    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:01.165542    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:01.176664    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:01.176747    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:01.193254    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:01.193334    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:01.204605    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:01.204687    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:01.217444    9647 logs.go:282] 0 containers: []
	W1209 03:36:01.217460    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:01.217532    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:01.229057    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:01.229078    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:01.229085    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:01.254930    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:01.254943    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:01.268094    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:01.268107    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:01.282606    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:01.282621    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:01.319621    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:01.319637    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:01.334331    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:01.334345    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:01.347028    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:01.347040    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:01.366242    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:01.366254    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:01.382511    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:01.382523    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:01.408271    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:01.408291    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:01.449604    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:01.449623    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:01.468579    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:01.468591    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:01.473179    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:01.473186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:01.487274    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:01.487289    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:01.502530    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:01.502543    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:01.515038    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:01.515050    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:01.530449    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:01.530463    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:04.045150    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:09.047458    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:09.047744    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:09.075754    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:09.075850    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:09.094994    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:09.095067    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:09.109014    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:09.109104    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:09.120937    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:09.121024    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:09.132371    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:09.132448    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:09.146984    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:09.147064    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:09.157028    9647 logs.go:282] 0 containers: []
	W1209 03:36:09.157040    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:09.157109    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:09.168853    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:09.168876    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:09.168884    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:09.173499    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:09.173511    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:09.188991    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:09.189005    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:09.204922    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:09.204943    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:09.220660    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:09.220675    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:09.232871    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:09.232885    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:09.273841    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:09.273859    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:09.286171    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:09.286186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:09.310130    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:09.310142    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:09.345714    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:09.345729    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:09.364505    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:09.364520    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:09.376738    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:09.376751    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:09.389068    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:09.389081    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:09.408158    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:09.408172    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:09.434208    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:09.434220    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:09.447053    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:09.447064    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:09.473478    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:09.473490    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:11.989496    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:16.991684    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:16.991952    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:17.017971    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:17.018068    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:17.033170    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:17.033256    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:17.044055    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:17.044137    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:17.054847    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:17.054937    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:17.065303    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:17.065359    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:17.081046    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:17.081126    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:17.092110    9647 logs.go:282] 0 containers: []
	W1209 03:36:17.092125    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:17.092198    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:17.103859    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:17.103881    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:17.103887    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:17.108580    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:17.108595    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:17.134651    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:17.134665    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:17.147857    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:17.147871    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:17.162286    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:17.162302    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:17.174322    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:17.174335    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:17.187223    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:17.187234    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:17.206988    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:17.207003    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:17.219937    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:17.219948    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:17.236001    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:17.236011    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:17.248382    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:17.248390    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:17.287832    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:17.287854    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:17.326761    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:17.326773    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:17.354381    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:17.354398    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:17.372913    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:17.372927    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:17.388637    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:17.388650    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:17.403603    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:17.403614    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:19.920600    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:24.922922    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:24.923156    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:24.941853    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:24.941979    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:24.956257    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:24.956349    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:24.968530    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:24.968601    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:24.978852    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:24.978941    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:24.990120    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:24.990203    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:25.000424    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:25.000501    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:25.011281    9647 logs.go:282] 0 containers: []
	W1209 03:36:25.011292    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:25.011360    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:25.023391    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:25.023410    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:25.023415    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:25.061855    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:25.061869    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:25.077082    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:25.077098    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:25.102985    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:25.103006    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:25.117754    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:25.117768    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:25.133309    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:25.133321    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:25.148978    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:25.148989    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:25.161850    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:25.161862    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:25.174777    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:25.174792    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:25.187561    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:25.187573    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:25.230604    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:25.230618    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:25.242855    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:25.242866    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:25.261251    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:25.261266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:25.276994    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:25.277009    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:25.281385    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:25.281395    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:25.297421    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:25.297433    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:25.314432    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:25.314444    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:27.840883    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:32.843093    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:32.843224    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:32.855170    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:32.855269    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:32.866170    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:32.866267    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:32.884795    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:32.884883    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:32.896012    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:32.896100    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:32.907162    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:32.907246    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:32.918338    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:32.918416    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:32.932128    9647 logs.go:282] 0 containers: []
	W1209 03:36:32.932143    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:32.932214    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:32.942394    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:32.942413    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:32.942419    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:32.954242    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:32.954257    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:32.969393    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:32.969404    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:32.983152    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:32.983164    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:32.996139    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:32.996153    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:33.012091    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:33.012103    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:33.038917    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:33.038927    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:33.051102    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:33.051111    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:33.078763    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:33.078777    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:33.118610    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:33.118622    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:33.146304    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:33.146315    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:33.186081    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:33.186095    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:33.200629    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:33.200641    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:33.214811    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:33.214825    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:33.227000    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:33.227014    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:33.239895    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:33.239906    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:33.265706    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:33.265718    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
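The cycle above repeats for the rest of the wait: each probe of https://10.0.2.15:8443/healthz fails after roughly five seconds with "Client.Timeout exceeded", minikube dumps component logs, pauses briefly, and probes again. A minimal Go sketch of that probe-with-timeout pattern, illustrative only and not the minikube source (the helper name pollHealthz and the 3-second pause are assumptions; the 5-second client timeout matches the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes an apiserver /healthz endpoint until it answers 200
    // or the overall deadline passes, mirroring the cadence seen in the log
    // above: ~5s per attempt, a short pause, then another attempt.
    func pollHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // source of "Client.Timeout exceeded while awaiting headers"
            Transport: &http.Transport{
                // the apiserver serves a self-signed cert inside the VM
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(3 * time.Second) // back off, gather logs, retry (assumed interval)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)); err != nil {
            fmt.Println(err)
        }
    }

In the log the probe never succeeds, so every iteration falls through to the log-gathering pass that follows.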
	I1209 03:36:35.772118    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:40.774898    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:40.775415    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:40.815606    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:40.815759    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:40.834961    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:40.835070    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:40.849125    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:40.849219    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:40.861711    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:40.861786    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:40.872422    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:40.872513    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:40.886764    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:40.886844    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:40.897740    9647 logs.go:282] 0 containers: []
	W1209 03:36:40.897757    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:40.897826    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:40.908641    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:40.908659    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:40.908665    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:40.920563    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:40.920577    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:40.946356    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:40.946375    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:40.973247    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:40.973266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:40.990378    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:40.990393    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:41.002306    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:41.002315    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:41.020867    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:41.020882    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:41.036224    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:41.036237    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:41.049013    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:41.049022    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:41.087022    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:41.087037    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:41.091852    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:41.091863    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:41.107012    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:41.107023    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:41.121540    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:41.121551    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:41.137901    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:41.137910    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:41.153441    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:41.153453    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:41.173483    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:41.173496    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:41.186057    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:41.186069    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:43.729588    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:48.732037    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:48.732539    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:48.764110    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:48.764310    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:48.785993    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:48.786115    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:48.799402    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:48.799501    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:48.811672    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:48.811755    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:48.825547    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:48.825627    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:48.835722    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:48.835806    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:48.846195    9647 logs.go:282] 0 containers: []
	W1209 03:36:48.846206    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:48.846274    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:48.856510    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:48.856527    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:48.856533    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:48.870461    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:48.870475    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:48.884101    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:48.884111    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:48.903540    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:48.903551    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:48.916593    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:48.916611    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:48.940541    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:48.940559    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:36:48.952728    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:48.952740    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:48.967082    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:48.967095    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:49.006591    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:49.006610    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:49.011625    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:49.011638    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:49.057410    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:49.057421    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:49.069532    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:49.069547    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:49.084603    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:49.084616    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:49.110432    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:49.110451    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:49.122693    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:49.122707    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:49.135633    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:49.135644    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:49.152426    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:49.152439    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:51.673644    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:36:56.676057    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:36:56.676519    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:36:56.719410    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:36:56.719568    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:36:56.737197    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:36:56.737311    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:36:56.751540    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:36:56.751641    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:36:56.763793    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:36:56.763872    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:36:56.774533    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:36:56.774615    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:36:56.790051    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:36:56.790130    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:36:56.800174    9647 logs.go:282] 0 containers: []
	W1209 03:36:56.800187    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:36:56.800262    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:36:56.810509    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:36:56.810530    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:36:56.810536    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:36:56.824717    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:36:56.824730    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:36:56.856570    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:36:56.856582    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:36:56.875529    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:36:56.875539    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:36:56.890554    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:36:56.890566    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:36:56.907934    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:36:56.907947    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:36:56.920284    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:36:56.920296    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:36:56.933358    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:36:56.933369    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:36:56.948947    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:36:56.948965    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:36:56.960532    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:36:56.960543    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:36:56.973146    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:36:56.973158    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:36:56.997355    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:36:56.997372    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:36:57.039565    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:36:57.039589    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:36:57.044441    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:36:57.044451    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:36:57.082477    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:36:57.082490    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:36:57.098315    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:36:57.098328    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:36:57.117129    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:36:57.117141    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
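Each gathering pass begins by resolving container IDs per control-plane component with a name filter; components that report two IDs (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, storage-provisioner) have a restarted container plus its exited predecessor, while kindnet consistently matches nothing. A hedged sketch of that lookup using os/exec; the docker CLI flags are exactly those shown in the log, but the helper name containerIDs is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name
    // matches k8s_<component> -- the same filter used in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %s %v\n", len(ids), c, ids)
        }
    }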
	I1209 03:36:59.635884    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:04.638424    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:04.638906    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:04.672263    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:04.672417    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:04.693656    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:04.693755    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:04.706291    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:04.706380    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:04.718626    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:04.718709    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:04.729721    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:04.729803    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:04.740930    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:04.741015    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:04.750937    9647 logs.go:282] 0 containers: []
	W1209 03:37:04.750948    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:04.751019    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:04.761760    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:04.761778    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:04.761784    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:04.774159    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:04.774173    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:04.815522    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:04.815535    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:04.831718    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:04.831731    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:04.858310    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:04.858329    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:04.882210    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:04.882223    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:04.887327    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:04.887338    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:04.905764    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:04.905776    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:04.920789    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:04.920805    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:04.939527    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:04.939540    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:04.951722    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:04.951733    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:04.964744    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:04.964756    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:04.977064    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:04.977075    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:04.997061    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:04.997071    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:05.014296    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:05.014306    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:05.055861    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:05.055874    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:05.071534    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:05.071545    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:07.586008    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:12.588301    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:12.588565    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:12.613211    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:12.613339    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:12.636980    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:12.637077    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:12.652573    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:12.652654    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:12.663732    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:12.663820    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:12.674537    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:12.674616    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:12.684799    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:12.684874    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:12.695858    9647 logs.go:282] 0 containers: []
	W1209 03:37:12.695870    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:12.695929    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:12.706573    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:12.706591    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:12.706601    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:12.718393    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:12.718406    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:12.729991    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:12.730002    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:12.742357    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:12.742369    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:12.755975    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:12.755988    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:12.774499    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:12.774514    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:12.797997    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:12.798009    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:12.839615    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:12.839638    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:12.876936    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:12.876949    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:12.889244    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:12.889257    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:12.905173    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:12.905181    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:12.921011    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:12.921023    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:12.945385    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:12.945402    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:12.973053    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:12.973062    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:12.989336    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:12.989348    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:13.001440    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:13.001453    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:13.006088    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:13.006096    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:15.523358    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:20.525565    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:20.525909    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:20.557338    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:20.557477    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:20.574203    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:20.574361    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:20.586854    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:20.586945    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:20.601522    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:20.601602    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:20.611802    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:20.611887    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:20.622057    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:20.622136    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:20.640643    9647 logs.go:282] 0 containers: []
	W1209 03:37:20.640653    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:20.640717    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:20.650858    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:20.650875    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:20.650881    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:20.662733    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:20.662743    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:20.680714    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:20.680724    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:20.684906    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:20.684913    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:20.718900    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:20.718910    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:20.734703    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:20.734715    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:20.750923    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:20.750934    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:20.769993    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:20.770006    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:20.815652    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:20.815670    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:20.828501    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:20.828515    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:20.840791    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:20.840803    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:20.856779    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:20.856789    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:20.882757    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:20.882768    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:20.897657    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:20.897669    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:20.921916    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:20.921929    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:20.936380    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:20.936393    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:20.952174    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:20.952186    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:23.465567    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:28.467814    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:28.468002    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:28.481510    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:28.481603    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:28.500548    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:28.500624    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:28.510926    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:28.511002    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:28.522468    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:28.522548    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:28.533878    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:28.533962    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:28.545041    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:28.545115    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:28.554942    9647 logs.go:282] 0 containers: []
	W1209 03:37:28.554958    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:28.555025    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:28.566729    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:28.566749    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:28.566755    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:28.571373    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:28.571380    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:28.593481    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:28.593492    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:28.608711    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:28.608721    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:28.647642    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:28.647649    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:28.661253    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:28.661263    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:28.676920    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:28.676932    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:28.689419    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:28.689431    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:28.713582    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:28.713601    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:28.753028    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:28.753044    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:28.780221    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:28.780231    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:28.796219    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:28.796231    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:28.808627    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:28.808640    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:28.822322    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:28.822332    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:28.835371    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:28.835383    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:28.848685    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:28.848699    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:28.863745    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:28.863761    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:31.376811    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:36.379165    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:36.379659    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:36.428560    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:36.428670    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:36.464642    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:36.464743    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:36.481565    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:36.481643    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:36.496952    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:36.497039    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:36.507517    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:36.507600    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:36.518598    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:36.518678    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:36.530256    9647 logs.go:282] 0 containers: []
	W1209 03:37:36.530273    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:36.530346    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:36.541123    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:36.541144    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:36.541150    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:36.565206    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:36.565216    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:36.580641    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:36.580656    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:36.598828    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:36.598842    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:36.611401    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:36.611413    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:36.623407    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:36.623415    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:36.637109    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:36.637120    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:36.650355    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:36.650366    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:36.678197    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:36.678212    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:36.693983    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:36.693992    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:36.707784    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:36.707794    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:36.720936    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:36.720952    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:36.736362    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:36.736374    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:36.775989    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:36.776002    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:36.781617    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:36.781633    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:36.798399    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:36.798408    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:36.836270    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:36.836285    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
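Once the IDs are known, each source is tailed to 400 lines: docker logs for the containers, journalctl for the kubelet and Docker units, dmesg for the kernel, and kubectl describe nodes for cluster state. A minimal sketch of the container half, assuming the IDs resolved by the lookup above (tailContainer is an illustrative helper, not a minikube function):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer dumps the last 400 lines of a container's log, the same
    // bound used throughout the gathering passes above. CombinedOutput is
    // used because container processes commonly log to stderr.
    func tailContainer(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // the two kube-apiserver containers seen in the log: current + exited predecessor
        for _, id := range []string{"2ff88a4a7e59", "8e04376e2372"} {
            logs, err := tailContainer(id)
            if err != nil {
                fmt.Println(id, "error:", err)
                continue
            }
            fmt.Println(logs)
        }
    }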
	I1209 03:37:39.352288    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:44.354999    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:44.355469    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:44.386472    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:44.386623    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:44.405885    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:44.405978    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:44.420396    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:44.420488    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:44.431920    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:44.432000    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:44.442292    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:44.442370    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:44.452781    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:44.452865    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:44.471794    9647 logs.go:282] 0 containers: []
	W1209 03:37:44.471805    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:44.471874    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:44.483358    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:44.483376    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:44.483381    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:44.487511    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:44.487519    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:44.502019    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:44.502032    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:44.518216    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:44.518230    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:44.530356    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:44.530367    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:44.542086    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:44.542098    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:44.564698    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:44.564707    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:44.583886    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:44.583896    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:44.600003    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:44.600012    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:44.620299    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:44.620311    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:44.633156    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:44.633168    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:44.648856    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:44.648871    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:44.662363    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:44.662376    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:44.704056    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:44.704069    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:44.742294    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:44.742306    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:44.755176    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:44.755189    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:44.786808    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:44.786828    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:47.307272    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:37:52.309530    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:37:52.309826    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:37:52.335174    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:37:52.335318    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:37:52.352404    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:37:52.352508    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:37:52.366920    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:37:52.367016    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:37:52.381981    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:37:52.382064    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:37:52.393211    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:37:52.393291    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:37:52.404294    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:37:52.404373    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:37:52.414353    9647 logs.go:282] 0 containers: []
	W1209 03:37:52.414364    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:37:52.414431    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:37:52.424821    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:37:52.424841    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:37:52.424849    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:37:52.439711    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:37:52.439721    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:37:52.451185    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:37:52.451195    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:37:52.462584    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:37:52.462596    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:37:52.466982    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:37:52.466988    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:37:52.478449    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:37:52.478461    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:37:52.490766    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:37:52.490781    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:37:52.516452    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:37:52.516465    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:37:52.538898    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:37:52.538909    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:37:52.554434    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:37:52.554445    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:37:52.570919    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:37:52.570932    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:37:52.586681    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:37:52.586692    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:37:52.611155    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:37:52.611171    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:37:52.648534    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:37:52.648551    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:37:52.663802    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:37:52.663814    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:37:52.685994    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:37:52.686006    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:37:52.726432    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:37:52.726454    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:37:55.242220    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:00.245024    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:00.245626    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:00.301221    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:38:00.301345    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:00.317512    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:38:00.317608    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:00.337109    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:38:00.337194    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:00.347721    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:38:00.347809    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:00.358570    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:38:00.358645    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:00.369293    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:38:00.369373    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:00.380064    9647 logs.go:282] 0 containers: []
	W1209 03:38:00.380076    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:00.380152    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:00.391234    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
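Before each gathering pass, minikube enumerates the candidate containers with one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, as the block above shows. A sketch of that discovery loop under the same assumptions as before (local docker CLI rather than ssh_runner):

    // discover.go — sketch of the container-discovery step shown above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches the k8s_<component> prefix kubelet gives pod containers.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }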
	I1209 03:38:00.391254    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:38:00.391260    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:38:00.405306    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:38:00.405318    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:38:00.430563    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:38:00.430575    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:38:00.442080    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:38:00.442091    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:38:00.454079    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:38:00.454091    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:38:00.471508    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:38:00.471519    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:38:00.482873    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:00.482883    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:00.524277    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:00.524293    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:00.528973    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:38:00.528985    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:38:00.544036    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:38:00.544049    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:38:00.559722    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:00.559734    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:00.584518    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:38:00.584538    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:00.611572    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:00.611585    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:00.661580    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:38:00.661591    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:38:00.675677    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:38:00.675689    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:38:00.687651    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:38:00.687663    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:38:00.707766    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:38:00.707778    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:38:03.224766    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:08.226988    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:08.227268    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:38:08.249689    9647 logs.go:282] 2 containers: [2ff88a4a7e59 8e04376e2372]
	I1209 03:38:08.249835    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:38:08.265302    9647 logs.go:282] 2 containers: [e31f68b47255 5b19c97e6b50]
	I1209 03:38:08.265409    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:38:08.281037    9647 logs.go:282] 1 containers: [230bb2ce39ec]
	I1209 03:38:08.281122    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:38:08.291501    9647 logs.go:282] 2 containers: [5d64725d9a34 a572daa6beda]
	I1209 03:38:08.291576    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:38:08.307243    9647 logs.go:282] 1 containers: [e024ae95bbbd]
	I1209 03:38:08.307314    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:38:08.318353    9647 logs.go:282] 2 containers: [ea93ea7b5664 5302a3675333]
	I1209 03:38:08.318425    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:38:08.330670    9647 logs.go:282] 0 containers: []
	W1209 03:38:08.330682    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:38:08.330749    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:38:08.341640    9647 logs.go:282] 2 containers: [ecba098e3a62 9768dd58455e]
	I1209 03:38:08.341658    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:38:08.341664    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:38:08.346396    9647 logs.go:123] Gathering logs for kube-apiserver [2ff88a4a7e59] ...
	I1209 03:38:08.346405    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff88a4a7e59"
	I1209 03:38:08.360612    9647 logs.go:123] Gathering logs for storage-provisioner [9768dd58455e] ...
	I1209 03:38:08.360623    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9768dd58455e"
	I1209 03:38:08.371765    9647 logs.go:123] Gathering logs for coredns [230bb2ce39ec] ...
	I1209 03:38:08.371775    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 230bb2ce39ec"
	I1209 03:38:08.383243    9647 logs.go:123] Gathering logs for kube-scheduler [5d64725d9a34] ...
	I1209 03:38:08.383253    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d64725d9a34"
	I1209 03:38:08.395054    9647 logs.go:123] Gathering logs for storage-provisioner [ecba098e3a62] ...
	I1209 03:38:08.395067    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecba098e3a62"
	I1209 03:38:08.406856    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:38:08.406867    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:38:08.429503    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:38:08.429517    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:38:08.443618    9647 logs.go:123] Gathering logs for etcd [e31f68b47255] ...
	I1209 03:38:08.443628    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e31f68b47255"
	I1209 03:38:08.458186    9647 logs.go:123] Gathering logs for kube-controller-manager [ea93ea7b5664] ...
	I1209 03:38:08.458201    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea93ea7b5664"
	I1209 03:38:08.476755    9647 logs.go:123] Gathering logs for kube-controller-manager [5302a3675333] ...
	I1209 03:38:08.476769    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5302a3675333"
	I1209 03:38:08.492209    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:38:08.492223    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 03:38:08.533754    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:38:08.533770    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:38:08.572603    9647 logs.go:123] Gathering logs for kube-apiserver [8e04376e2372] ...
	I1209 03:38:08.572615    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e04376e2372"
	I1209 03:38:08.599384    9647 logs.go:123] Gathering logs for etcd [5b19c97e6b50] ...
	I1209 03:38:08.599399    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b19c97e6b50"
	I1209 03:38:08.615292    9647 logs.go:123] Gathering logs for kube-scheduler [a572daa6beda] ...
	I1209 03:38:08.615309    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a572daa6beda"
	I1209 03:38:08.631593    9647 logs.go:123] Gathering logs for kube-proxy [e024ae95bbbd] ...
	I1209 03:38:08.631606    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e024ae95bbbd"
	I1209 03:38:11.149993    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:16.152540    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:16.152616    9647 kubeadm.go:597] duration metric: took 4m4.284523041s to restartPrimaryControlPlane
	W1209 03:38:16.152659    9647 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 03:38:16.152687    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1209 03:38:17.227231    9647 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0745515s)
	I1209 03:38:17.227321    9647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 03:38:17.232121    9647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 03:38:17.234925    9647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 03:38:17.237993    9647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 03:38:17.237999    9647 kubeadm.go:157] found existing configuration files:
	
	I1209 03:38:17.238027    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf
	I1209 03:38:17.240960    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 03:38:17.240994    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 03:38:17.243636    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf
	I1209 03:38:17.246017    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 03:38:17.246052    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 03:38:17.249139    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf
	I1209 03:38:17.251732    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 03:38:17.251761    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 03:38:17.254250    9647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf
	I1209 03:38:17.257203    9647 kubeadm.go:163] "https://control-plane.minikube.internal:60521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:60521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 03:38:17.257233    9647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
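The grep/rm sequence above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so the subsequent `kubeadm init` regenerates it (here all four files are already absent, so every grep exits 2 and the rm is a no-op). Condensed into local file operations — minikube actually performs this remotely over SSH — the logic looks roughly like:

    // stalecheck.go — a condensed sketch of the cleanup loop above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:60521"
    	for _, f := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		// missing file or wrong endpoint both mean: delete and regenerate
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			os.Remove(path) // ignore the error, as rm -f does
    		}
    	}
    }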
	I1209 03:38:17.259927    9647 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 03:38:17.277621    9647 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 03:38:17.277659    9647 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 03:38:17.324983    9647 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 03:38:17.325045    9647 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 03:38:17.325102    9647 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 03:38:17.379615    9647 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 03:38:17.384823    9647 out.go:235]   - Generating certificates and keys ...
	I1209 03:38:17.384862    9647 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 03:38:17.384898    9647 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 03:38:17.384951    9647 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 03:38:17.384987    9647 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 03:38:17.385036    9647 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 03:38:17.385071    9647 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 03:38:17.385127    9647 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 03:38:17.385162    9647 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 03:38:17.385206    9647 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 03:38:17.385278    9647 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 03:38:17.385303    9647 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 03:38:17.385338    9647 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 03:38:17.565063    9647 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 03:38:17.660313    9647 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 03:38:17.719712    9647 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 03:38:18.081137    9647 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 03:38:18.110125    9647 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 03:38:18.110524    9647 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 03:38:18.110556    9647 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 03:38:18.198241    9647 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 03:38:18.201465    9647 out.go:235]   - Booting up control plane ...
	I1209 03:38:18.201511    9647 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 03:38:18.201555    9647 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 03:38:18.201605    9647 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 03:38:18.201642    9647 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 03:38:18.201738    9647 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 03:38:22.703716    9647 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502250 seconds
	I1209 03:38:22.703786    9647 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 03:38:22.707214    9647 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 03:38:23.221832    9647 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 03:38:23.222163    9647 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-416000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 03:38:23.726610    9647 kubeadm.go:310] [bootstrap-token] Using token: ilakkd.dsphbr8h9ubfikit
	I1209 03:38:23.732757    9647 out.go:235]   - Configuring RBAC rules ...
	I1209 03:38:23.732812    9647 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 03:38:23.732855    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 03:38:23.738173    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 03:38:23.738907    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1209 03:38:23.739595    9647 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 03:38:23.740329    9647 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 03:38:23.743188    9647 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 03:38:23.938940    9647 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 03:38:24.129800    9647 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 03:38:24.130317    9647 kubeadm.go:310] 
	I1209 03:38:24.130346    9647 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 03:38:24.130388    9647 kubeadm.go:310] 
	I1209 03:38:24.130429    9647 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 03:38:24.130435    9647 kubeadm.go:310] 
	I1209 03:38:24.130482    9647 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 03:38:24.130527    9647 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 03:38:24.130556    9647 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 03:38:24.130583    9647 kubeadm.go:310] 
	I1209 03:38:24.130627    9647 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 03:38:24.130629    9647 kubeadm.go:310] 
	I1209 03:38:24.130670    9647 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 03:38:24.130673    9647 kubeadm.go:310] 
	I1209 03:38:24.130721    9647 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 03:38:24.130763    9647 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 03:38:24.130833    9647 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 03:38:24.130836    9647 kubeadm.go:310] 
	I1209 03:38:24.130882    9647 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 03:38:24.130937    9647 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 03:38:24.130942    9647 kubeadm.go:310] 
	I1209 03:38:24.130984    9647 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ilakkd.dsphbr8h9ubfikit \
	I1209 03:38:24.131033    9647 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 \
	I1209 03:38:24.131042    9647 kubeadm.go:310] 	--control-plane 
	I1209 03:38:24.131044    9647 kubeadm.go:310] 
	I1209 03:38:24.131114    9647 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 03:38:24.131119    9647 kubeadm.go:310] 
	I1209 03:38:24.131161    9647 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ilakkd.dsphbr8h9ubfikit \
	I1209 03:38:24.131265    9647 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4a782ae2dfd662275a1e7aa9644899b689ef07f45552176e7bc27057154b9dd4 
	I1209 03:38:24.131321    9647 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
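The --discovery-token-ca-cert-hash value printed in the join commands above is kubeadm's standard CA pin: a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch that recomputes it from the certs directory used above; run on this node it should print the sha256:4a782ae2... value embedded in the join commands:

    // cahash.go — recompute kubeadm's discovery-token-ca-cert-hash.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(caPEM)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// the hash covers the Subject Public Key Info, not the whole cert
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }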
	I1209 03:38:24.131331    9647 cni.go:84] Creating CNI manager for ""
	I1209 03:38:24.131344    9647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:38:24.135791    9647 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 03:38:24.142784    9647 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 03:38:24.145945    9647 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
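The log records only that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown. For orientation, this sketch writes a representative bridge conflist of the kind the "Configuring bridge CNI" step installs — every field value here is illustrative, not the exact bytes transferred:

    // bridgecni.go — write an illustrative bridge CNI conflist.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// mirrors the `sudo mkdir -p /etc/cni/net.d` step above
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }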
	I1209 03:38:24.150933    9647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 03:38:24.151008    9647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 03:38:24.151226    9647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-416000 minikube.k8s.io/updated_at=2024_12_09T03_38_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c minikube.k8s.io/name=stopped-upgrade-416000 minikube.k8s.io/primary=true
	I1209 03:38:24.182745    9647 kubeadm.go:1113] duration metric: took 31.797625ms to wait for elevateKubeSystemPrivileges
	I1209 03:38:24.182785    9647 ops.go:34] apiserver oom_adj: -16
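"apiserver oom_adj: -16" means the kernel has been told to strongly deprioritize the apiserver as an OOM-kill target (oom_adj ranges from -17, never kill, to +15). The probe itself is just a /proc read keyed off pgrep, as in this local sketch of the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command above:

    // oomcheck.go — read the apiserver's legacy OOM adjustment value.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x: exact process-name match, -n: newest matching PID
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }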
	I1209 03:38:24.189961    9647 kubeadm.go:394] duration metric: took 4m12.335566458s to StartCluster
	I1209 03:38:24.189980    9647 settings.go:142] acquiring lock: {Name:mk9d239bb773df077cf7eb12290ff1e68f296c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:24.190158    9647 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:38:24.190545    9647 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/kubeconfig: {Name:mkcab6edf2c02dd56919c96ee93c72d0b668d23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:38:24.191071    9647 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:38:24.191070    9647 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 03:38:24.191120    9647 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-416000"
	I1209 03:38:24.191129    9647 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-416000"
	W1209 03:38:24.191132    9647 addons.go:243] addon storage-provisioner should already be in state true
	I1209 03:38:24.191142    9647 host.go:66] Checking if "stopped-upgrade-416000" exists ...
	I1209 03:38:24.191155    9647 config.go:182] Loaded profile config "stopped-upgrade-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 03:38:24.191160    9647 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-416000"
	I1209 03:38:24.191324    9647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-416000"
	I1209 03:38:24.192280    9647 retry.go:31] will retry after 1.258603263s: connect: dial unix /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/monitor: connect: connection refused
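The "will retry after 1.258603263s" line above is a retry with backoff; the non-round delay suggests jitter is applied to each wait. A sketch of that pattern under those assumptions — the constants and jitter scheme here are illustrative, not minikube's actual tuning:

    // retrysketch.go — exponential backoff with jitter, as suggested by
    // the non-round retry delay in the log above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// add up to 1x random jitter, so waits land between 1x and 2x delay
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: %v\n", jittered, err)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	_ = retry(3, time.Second, func() error {
    		return errors.New("connect: connection refused")
    	})
    }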
	I1209 03:38:24.192964    9647 kapi.go:59] client config for stopped-upgrade-416000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/stopped-upgrade-416000/client.key", CAFile:"/Users/jenkins/minikube-integration/20068-6536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102bcb740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 03:38:24.193267    9647 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-416000"
	W1209 03:38:24.193271    9647 addons.go:243] addon default-storageclass should already be in state true
	I1209 03:38:24.193278    9647 host.go:66] Checking if "stopped-upgrade-416000" exists ...
	I1209 03:38:24.193781    9647 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:24.193785    9647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 03:38:24.193790    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:38:24.195791    9647 out.go:177] * Verifying Kubernetes components...
	I1209 03:38:24.203782    9647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 03:38:24.292997    9647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 03:38:24.298205    9647 api_server.go:52] waiting for apiserver process to appear ...
	I1209 03:38:24.298263    9647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 03:38:24.302451    9647 api_server.go:72] duration metric: took 111.368958ms to wait for apiserver process to appear ...
	I1209 03:38:24.302461    9647 api_server.go:88] waiting for apiserver healthz status ...
	I1209 03:38:24.302467    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:24.362822    9647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 03:38:24.686399    9647 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 03:38:24.686409    9647 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 03:38:25.458307    9647 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 03:38:25.462224    9647 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:25.462239    9647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 03:38:25.462254    9647 sshutil.go:53] new ssh client: &{IP:localhost Port:60489 SSHKeyPath:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/stopped-upgrade-416000/id_rsa Username:docker}
	I1209 03:38:25.510574    9647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 03:38:29.304487    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:29.304537    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:34.305065    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:34.305086    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:39.305386    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:39.305408    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:44.305849    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:44.305901    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:49.306616    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:49.306651    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:38:54.307535    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:54.307588    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 03:38:54.688931    9647 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 03:38:54.692988    9647 out.go:177] * Enabled addons: storage-provisioner
	I1209 03:38:54.699939    9647 addons.go:510] duration metric: took 30.509574s for enable addons: enabled=[storage-provisioner]
	I1209 03:38:59.308675    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:38:59.308727    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:04.310275    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:04.310317    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:09.312244    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:09.312301    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:14.314458    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:14.314481    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:19.314814    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:19.314845    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:24.317010    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:24.317169    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:24.345135    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:39:24.345230    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:24.357615    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:39:24.357696    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:24.368392    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:39:24.368474    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:24.378607    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:39:24.378681    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:24.389130    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:39:24.389216    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:24.399433    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:39:24.399527    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:24.409668    9647 logs.go:282] 0 containers: []
	W1209 03:39:24.409678    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:24.409739    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:24.420337    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:39:24.420352    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:24.420357    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:24.425276    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:39:24.425283    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:39:24.439849    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:39:24.439863    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:39:24.457730    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:39:24.457741    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:39:24.469007    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:24.469017    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:39:24.505683    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:24.505783    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:24.507551    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:24.507557    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:24.549731    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:39:24.549743    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:39:24.564254    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:39:24.564267    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:39:24.579727    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:39:24.579741    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:39:24.592101    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:39:24.592111    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:39:24.607340    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:39:24.607350    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:39:24.619640    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:24.619653    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:24.643743    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:39:24.643761    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:24.655681    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:24.655704    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:39:24.655730    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:39:24.655736    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:24.655739    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:24.655742    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:24.655745    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
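The "Found kubelet problem" warnings above come from scanning the gathered journalctl output for known failure signatures — here the node authorizer rejecting the kubelet's ConfigMap list/watch because no relationship between the node and the object has been established yet. A sketch of that scanning pass, with an illustrative pattern list:

    // problemscan.go — scan kubelet journal output for known problem lines,
    // as the logs.go:138 warnings above do. Patterns here are illustrative.
    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    func findProblems(journal string) []string {
    	patterns := []string{"failed to list", "Failed to watch", "is forbidden"}
    	var hits []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, p := range patterns {
    			if strings.Contains(line, p) {
    				hits = append(hits, line)
    				break
    			}
    		}
    	}
    	return hits
    }

    func main() {
    	// two hypothetical journal lines: one problem, one normal
    	journal := "Dec 09 11:38:37 kubelet[10474]: W1209 ... failed to list *v1.ConfigMap\n" +
    		"Dec 09 11:38:37 kubelet[10474]: I1209 ... normal line"
    	for _, h := range findProblems(journal) {
    		fmt.Println("Found kubelet problem:", h)
    	}
    }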
	I1209 03:39:34.658081    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:39.660406    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:39.660637    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:39.675388    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:39:39.675490    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:39.686852    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:39:39.686920    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:39.697271    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:39:39.697338    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:39.708371    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:39:39.708454    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:39.719137    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:39:39.719221    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:39.730750    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:39:39.730829    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:39.741850    9647 logs.go:282] 0 containers: []
	W1209 03:39:39.741863    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:39.741935    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:39.752382    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:39:39.752398    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:39:39.752404    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:39:39.771286    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:39:39.771302    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:39:39.783126    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:39:39.783135    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:39:39.797828    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:39:39.797838    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:39:39.812101    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:39.812113    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:39.835762    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:39.835772    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:39.840170    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:39:39.840179    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:39:39.855020    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:39:39.855032    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:39:39.869747    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:39:39.869757    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:39:39.886986    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:39:39.886997    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:39.898656    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:39.898666    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:39:39.934966    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:39.935061    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:39.936886    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:39.936892    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:39.973991    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:39:39.974004    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:39:39.989168    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:39.989178    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:39:39.989205    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:39:39.989209    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:39.989213    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:39.989216    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:39.989219    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:39:49.993143    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:39:54.993738    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:39:54.994034    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:39:55.016330    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:39:55.016461    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:39:55.032400    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:39:55.032497    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:39:55.045264    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:39:55.045343    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:39:55.056465    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:39:55.056546    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:39:55.067001    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:39:55.067080    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:39:55.077788    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:39:55.077866    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:39:55.087656    9647 logs.go:282] 0 containers: []
	W1209 03:39:55.087668    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:39:55.087730    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:39:55.097959    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:39:55.097976    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:39:55.097982    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:39:55.112448    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:39:55.112460    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:39:55.125986    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:39:55.125999    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:39:55.138573    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:39:55.138585    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:39:55.143162    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:39:55.143170    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:39:55.158059    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:39:55.158072    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:39:55.175618    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:39:55.175628    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:39:55.187140    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:39:55.187151    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:39:55.202845    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:39:55.202859    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:39:55.227239    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:39:55.227246    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:39:55.263762    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:55.263857    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:55.265679    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:39:55.265684    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:39:55.310492    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:39:55.310502    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:39:55.322794    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:39:55.322806    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:39:55.341226    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:55.341236    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:39:55.341266    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:39:55.341281    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:39:55.341285    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:39:55.341288    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:39:55.341291    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
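
The same polling cycle repeats below at roughly 15-second intervals: the apiserver healthz endpoint is probed with a 5-second client timeout, and each failure triggers another full log-gathering pass. A minimal sketch of the probe by hand, assuming the guest address 10.0.2.15:8443 is reachable from where you run it (curl here stands in for minikube's internal Go HTTP client):

	# Probe the apiserver health endpoint; -k skips TLS verification and
	# --max-time 5 mirrors the client timeout seen in the log above.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
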
	I1209 03:40:05.345216    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:10.347327    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:10.347562    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:10.367783    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:10.367884    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:10.382277    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:10.382376    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:10.394264    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:40:10.394346    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:10.405552    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:10.405638    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:10.416257    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:10.416335    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:10.433429    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:10.433510    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:10.443923    9647 logs.go:282] 0 containers: []
	W1209 03:40:10.443943    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:10.444022    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:10.454262    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:10.454279    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:10.454285    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:10.490525    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:10.490622    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:10.492348    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:10.492354    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:10.529279    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:10.529294    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:10.543367    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:10.543378    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:10.554995    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:10.555004    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:10.570036    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:10.570051    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:10.593911    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:10.593922    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:10.607117    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:10.607131    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:10.611658    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:10.611667    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:10.626011    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:10.626024    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:10.637579    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:10.637595    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:10.649506    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:10.649516    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:10.667409    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:10.667421    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:10.678901    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:10.678911    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:10.678937    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:10.678941    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:10.678944    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:10.678947    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:10.678950    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
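
Each gathering pass first enumerates one container per control-plane component with a docker name filter, then pulls that container's recent output. The enumeration can be repeated by hand from the host, a sketch assuming the profile name stopped-upgrade-416000 and the container IDs shown above:

	# List the kube-apiserver container ID the same way logs.go does.
	minikube -p stopped-upgrade-416000 ssh -- "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
	# Fetch its last 400 lines of output, matching the tail size above.
	minikube -p stopped-upgrade-416000 ssh -- "docker logs --tail 400 891374f521db"
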
	I1209 03:40:20.682951    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:25.685253    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:25.685512    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:25.712516    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:25.712610    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:25.727705    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:25.727788    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:25.738512    9647 logs.go:282] 2 containers: [8119ba0a4b38 f980d379a2f6]
	I1209 03:40:25.738597    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:25.750297    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:25.750376    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:25.762825    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:25.762903    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:25.773633    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:25.773712    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:25.787898    9647 logs.go:282] 0 containers: []
	W1209 03:40:25.787910    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:25.787978    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:25.801406    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:25.801422    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:25.801427    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:25.846096    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:25.846109    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:25.857813    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:25.857826    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:25.877499    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:25.877512    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:25.895869    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:25.895879    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:25.907561    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:25.907573    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:25.944907    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:25.945009    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:25.946775    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:25.946782    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:25.951040    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:25.951047    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:25.965688    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:25.965699    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:25.988594    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:25.988602    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:25.999606    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:25.999617    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:26.038889    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:26.038899    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:26.053235    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:26.053245    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:26.065850    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:26.065862    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:26.065891    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:26.065900    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:26.065903    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:26.065907    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:26.065912    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
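
The "container status" step uses a small shell fallback: it prefers crictl when the binary is installed and otherwise drops back to the plain Docker CLI, so the same command line works on CRI and non-CRI images alike. The idiom, copied from the log with comments added:

	# Use crictl if `which` finds it; `|| echo crictl` keeps the command
	# string non-empty, so a missing binary fails and falls through to docker.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
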
	I1209 03:40:36.069892    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:41.072179    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:41.072450    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:41.096138    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:41.096262    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:41.112152    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:41.112247    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:41.125489    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:40:41.125579    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:41.136747    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:41.136827    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:41.146725    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:41.146797    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:41.157659    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:41.157740    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:41.167763    9647 logs.go:282] 0 containers: []
	W1209 03:40:41.167775    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:41.167850    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:41.178154    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:41.178171    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:41.178177    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:41.189768    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:41.189782    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:41.204160    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:41.204171    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:41.218346    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:41.218359    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:41.240032    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:41.240044    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:41.266167    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:41.266177    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:41.277443    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:41.277453    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:41.281475    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:41.281484    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:41.318807    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:40:41.318819    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:40:41.334860    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:41.334875    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:41.346628    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:41.346640    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:41.358745    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:41.358760    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:41.393746    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:41.393840    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:41.395563    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:40:41.395568    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:40:41.406596    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:41.406610    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:41.418007    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:41.418020    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:41.432991    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:41.433003    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:41.433027    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:41.433031    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:41.433034    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:41.433037    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:41.433040    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:40:51.437092    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:40:56.439628    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:40:56.439854    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:40:56.458437    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:40:56.458551    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:40:56.472641    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:40:56.472732    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:40:56.485319    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:40:56.485409    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:40:56.495878    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:40:56.495963    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:40:56.506725    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:40:56.506795    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:40:56.517362    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:40:56.517439    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:40:56.528227    9647 logs.go:282] 0 containers: []
	W1209 03:40:56.528241    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:40:56.528313    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:40:56.538892    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:40:56.538908    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:40:56.538915    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:40:56.574576    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:56.574675    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:56.576396    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:40:56.576404    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:40:56.580846    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:40:56.580853    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:40:56.592440    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:40:56.592454    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:40:56.607114    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:40:56.607127    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:40:56.633156    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:40:56.633169    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:40:56.645218    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:40:56.645228    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:40:56.659596    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:40:56.659607    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:40:56.671320    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:40:56.671332    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:40:56.687429    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:40:56.687439    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:40:56.703220    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:40:56.703231    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:40:56.724546    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:40:56.724560    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:40:56.738961    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:40:56.738972    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:40:56.750873    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:40:56.750884    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:40:56.787062    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:40:56.787072    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:40:56.798798    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:56.798808    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:40:56.798833    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:40:56.798837    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:40:56.798842    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:40:56.798852    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:40:56.798855    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
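
The only kubelet problem the scanner flags, in every pass, is the same pair of reflector errors from 11:38:37: the kubelet's node identity is denied list/watch on the kube-proxy ConfigMap because the Node authorizer finds no relationship between node stopped-upgrade-416000 and that object (typically the edge is missing because the kube-proxy pod is not, or not yet, bound to the node in the apiserver's view). The denial can be checked with impersonation, a sketch assuming a kubeconfig whose user may impersonate nodes (e.g. cluster-admin):

	# Ask the apiserver whether the node identity may list ConfigMaps;
	# --as/--as-group impersonate the kubelet's node credentials.
	kubectl auth can-i list configmaps -n kube-system \
	  --as=system:node:stopped-upgrade-416000 --as-group=system:nodes
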
	I1209 03:41:06.801493    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:11.801646    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:11.801863    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:11.814987    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:11.815067    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:11.826474    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:11.826553    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:11.837599    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:11.837674    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:11.847931    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:11.848013    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:11.863123    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:11.863200    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:11.877802    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:11.877874    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:11.888058    9647 logs.go:282] 0 containers: []
	W1209 03:41:11.888070    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:11.888136    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:11.898493    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:11.898510    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:11.898515    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:11.917680    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:11.917693    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:11.931551    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:11.931564    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:11.943032    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:11.943046    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:11.957727    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:11.957742    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:11.974778    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:11.974788    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:11.999149    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:11.999158    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:12.018150    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:12.018164    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:12.030152    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:12.030163    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:12.042034    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:12.042047    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:12.060015    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:12.060027    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:12.071663    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:12.071677    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:12.108347    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:12.108447    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:12.110219    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:12.110225    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:12.115276    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:12.115284    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:12.155088    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:12.155099    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:12.167797    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:12.167812    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:12.167841    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:12.167847    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:12.167852    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:12.167872    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:12.167877    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:41:22.171792    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:27.173888    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:27.174021    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:27.185881    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:27.185969    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:27.196698    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:27.196771    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:27.207078    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:27.207159    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:27.217665    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:27.217733    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:27.228097    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:27.228164    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:27.242732    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:27.242814    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:27.252899    9647 logs.go:282] 0 containers: []
	W1209 03:41:27.252911    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:27.252982    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:27.263366    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:27.263382    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:27.263388    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:27.275346    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:27.275358    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:27.280171    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:27.280180    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:27.313531    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:27.313542    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:27.324944    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:27.324956    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:27.336701    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:27.336711    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:27.347828    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:27.347842    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:27.372184    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:27.372196    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:27.384734    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:27.384745    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:27.399179    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:27.399191    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:27.436239    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:27.436332    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:27.438059    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:27.438063    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:27.452096    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:27.452108    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:27.463610    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:27.463621    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:27.478554    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:27.478564    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:27.490336    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:27.490345    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:27.507557    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:27.507567    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:27.507588    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:27.507592    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:27.507596    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:27.507618    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:27.507624    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:41:37.511592    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:42.513849    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:42.514152    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:42.539617    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:42.539761    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:42.556895    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:42.557009    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:42.570222    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:42.570311    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:42.581439    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:42.581518    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:42.612150    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:42.612234    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:42.626716    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:42.626793    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:42.637185    9647 logs.go:282] 0 containers: []
	W1209 03:41:42.637197    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:42.637264    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:42.647377    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:42.647397    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:42.647403    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:42.652079    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:42.652091    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:42.664040    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:42.664052    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:42.676208    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:42.676219    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:42.688022    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:42.688033    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:42.711739    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:42.711750    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:42.723399    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:42.723413    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:42.758020    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:42.758115    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:42.759939    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:42.759945    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:42.774012    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:42.774024    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:42.786600    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:42.786614    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:42.811937    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:42.811946    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:42.846300    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:42.846311    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:42.858652    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:42.858665    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:42.874328    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:42.874342    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:42.889380    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:42.889391    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:42.908313    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:42.908323    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:42.908350    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:42.908353    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:42.908357    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:42.908361    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:42.908364    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:41:52.912269    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:41:57.914507    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:41:57.914743    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:41:57.937587    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:41:57.937725    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:41:57.954540    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:41:57.954637    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:41:57.967631    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:41:57.967722    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:41:57.980192    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:41:57.980268    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:41:57.990760    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:41:57.990835    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:41:58.001786    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:41:58.001858    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:41:58.019022    9647 logs.go:282] 0 containers: []
	W1209 03:41:58.019036    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:41:58.019099    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:41:58.029667    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:41:58.029685    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:41:58.029691    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:41:58.041436    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:41:58.041448    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:41:58.057188    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:41:58.057197    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:41:58.074611    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:41:58.074622    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:41:58.079495    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:41:58.079501    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:41:58.090907    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:41:58.090918    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:41:58.102577    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:41:58.102587    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:41:58.140142    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:58.140241    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:58.142047    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:41:58.142057    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:41:58.155539    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:41:58.155550    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:41:58.169467    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:41:58.169480    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:41:58.181424    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:41:58.181437    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:41:58.196255    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:41:58.196266    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:41:58.207987    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:41:58.208001    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:41:58.242659    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:41:58.242671    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:41:58.257689    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:41:58.257702    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:41:58.282003    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:58.282013    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:41:58.282036    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:41:58.282040    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:41:58.282043    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:41:58.282049    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:41:58.282051    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:42:08.285440    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:13.287705    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:13.287896    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 03:42:13.301952    9647 logs.go:282] 1 containers: [891374f521db]
	I1209 03:42:13.302046    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 03:42:13.313344    9647 logs.go:282] 1 containers: [046faa0fdb82]
	I1209 03:42:13.313420    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 03:42:13.323935    9647 logs.go:282] 4 containers: [825aa5744cd7 15504e3f9248 8119ba0a4b38 f980d379a2f6]
	I1209 03:42:13.324016    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 03:42:13.334690    9647 logs.go:282] 1 containers: [8236d7bacdaf]
	I1209 03:42:13.334763    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 03:42:13.346288    9647 logs.go:282] 1 containers: [49229f516531]
	I1209 03:42:13.346364    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 03:42:13.356617    9647 logs.go:282] 1 containers: [2e357d126efa]
	I1209 03:42:13.356695    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 03:42:13.366925    9647 logs.go:282] 0 containers: []
	W1209 03:42:13.366939    9647 logs.go:284] No container was found matching "kindnet"
	I1209 03:42:13.367001    9647 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 03:42:13.381744    9647 logs.go:282] 1 containers: [7ee3a4efa8d3]
	I1209 03:42:13.381765    9647 logs.go:123] Gathering logs for kubelet ...
	I1209 03:42:13.381793    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 03:42:13.418081    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:42:13.418175    9647 logs.go:138] Found kubelet problem: Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:42:13.419885    9647 logs.go:123] Gathering logs for dmesg ...
	I1209 03:42:13.419890    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 03:42:13.423839    9647 logs.go:123] Gathering logs for coredns [f980d379a2f6] ...
	I1209 03:42:13.423847    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f980d379a2f6"
	I1209 03:42:13.447655    9647 logs.go:123] Gathering logs for coredns [15504e3f9248] ...
	I1209 03:42:13.447672    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15504e3f9248"
	I1209 03:42:13.465274    9647 logs.go:123] Gathering logs for kube-scheduler [8236d7bacdaf] ...
	I1209 03:42:13.465284    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8236d7bacdaf"
	I1209 03:42:13.485750    9647 logs.go:123] Gathering logs for kube-proxy [49229f516531] ...
	I1209 03:42:13.485762    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49229f516531"
	I1209 03:42:13.497525    9647 logs.go:123] Gathering logs for describe nodes ...
	I1209 03:42:13.497537    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 03:42:13.534406    9647 logs.go:123] Gathering logs for kube-apiserver [891374f521db] ...
	I1209 03:42:13.534419    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891374f521db"
	I1209 03:42:13.549345    9647 logs.go:123] Gathering logs for etcd [046faa0fdb82] ...
	I1209 03:42:13.549358    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 046faa0fdb82"
	I1209 03:42:13.567132    9647 logs.go:123] Gathering logs for kube-controller-manager [2e357d126efa] ...
	I1209 03:42:13.567143    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e357d126efa"
	I1209 03:42:13.584774    9647 logs.go:123] Gathering logs for storage-provisioner [7ee3a4efa8d3] ...
	I1209 03:42:13.584788    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ee3a4efa8d3"
	I1209 03:42:13.596202    9647 logs.go:123] Gathering logs for Docker ...
	I1209 03:42:13.596212    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 03:42:13.622375    9647 logs.go:123] Gathering logs for coredns [825aa5744cd7] ...
	I1209 03:42:13.622384    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 825aa5744cd7"
	I1209 03:42:13.634945    9647 logs.go:123] Gathering logs for coredns [8119ba0a4b38] ...
	I1209 03:42:13.634958    9647 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8119ba0a4b38"
	I1209 03:42:13.647275    9647 logs.go:123] Gathering logs for container status ...
	I1209 03:42:13.647290    9647 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 03:42:13.666793    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:42:13.666806    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 03:42:13.666833    9647 out.go:270] X Problems detected in kubelet:
	W1209 03:42:13.666837    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: W1209 11:38:37.904997   10474 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	W1209 03:42:13.666840    9647 out.go:270]   Dec 09 11:38:37 stopped-upgrade-416000 kubelet[10474]: E1209 11:38:37.905030   10474 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-416000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-416000' and this object
	I1209 03:42:13.666844    9647 out.go:358] Setting ErrFile to fd 2...
	I1209 03:42:13.666846    9647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:42:23.670827    9647 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 03:42:28.671571    9647 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 03:42:28.674759    9647 out.go:201] 
	W1209 03:42:28.678767    9647 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 03:42:28.678776    9647 out.go:270] * 
	W1209 03:42:28.679763    9647 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:42:28.691818    9647 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-416000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (593.24s)
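
The trace above ends with minikube polling the apiserver's /healthz endpoint every ~10s (api_server.go:253/269) until the 6m0s node-start deadline lapses. For orientation, a minimal Go sketch of that polling pattern follows; the function name, the 5-second per-request timeout, and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
// The apiserver serves a self-signed certificate, so verification is skipped.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // assumed per-request timeout
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(10 * time.Second) // matches the ~10s cadence visible in the log
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}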

TestPause/serial/Start (10.07s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-769000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-769000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.00005725s)

-- stdout --
	* [pause-769000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-769000" primary control-plane node in "pause-769000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-769000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-769000 -n pause-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-769000 -n pause-769000: exit status 7 (70.07075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-769000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.07s)
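
As with most qemu2 failures in this report, the proximate cause is that no socket_vmnet daemon is listening on /var/run/socket_vmnet, so the driver's connection attempt is refused before the VM can get a network. A hedged Go sketch of that probe (the socket path is taken from the log; the 2-second timeout is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket the qemu2 driver expects. When the daemon is down,
// this fails with "connect: connection refused", mirroring the ERROR lines above.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}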

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-797000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-797000 --driver=qemu2 : exit status 80 (9.788677s)

-- stdout --
	* [NoKubernetes-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-797000" primary control-plane node in "NoKubernetes-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-797000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000: exit status 7 (72.972792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20068
- KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current612559878/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.96s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.44s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20068
- KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2146370619/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.44s)
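
Both TestHyperkitDriverSkipUpgrade subtests fail the same way: hyperkit is an Intel-only macOS hypervisor, so on darwin/arm64 minikube rejects the driver with DRV_UNSUPPORTED_OS and exit status 56. A minimal sketch of such a platform gate, assuming only what the output above shows (the message and exit code come from the log; the function itself is hypothetical):

package main

import (
	"fmt"
	"os"
	"runtime"
)

// checkHyperkitSupported rejects any platform other than darwin/amd64,
// the only one the hyperkit hypervisor runs on.
func checkHyperkitSupported() error {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		return fmt.Errorf("The driver 'hyperkit' is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
	return nil
}

func main() {
	if err := checkHyperkitSupported(); err != nil {
		fmt.Println("X Exiting due to DRV_UNSUPPORTED_OS:", err)
		os.Exit(56) // exit status observed in the test output
	}
}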

TestNoKubernetes/serial/StartWithStopK8s (5.34s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --driver=qemu2 
I1209 03:43:24.167367    7820 install.go:79] stdout: 
W1209 03:43:24.167584    7820 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit 

I1209 03:43:24.167614    7820 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit]
I1209 03:43:24.183883    7820 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit]
I1209 03:43:24.196212    7820 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit]
I1209 03:43:24.207404    7820 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit]
I1209 03:43:24.228891    7820 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 03:43:24.229006    7820 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1209 03:43:26.030369    7820 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1209 03:43:26.030393    7820 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1209 03:43:26.030440    7820 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1209 03:43:26.030476    7820 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit
I1209 03:43:26.420584    7820 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0] Decompressors:map[bz2:0x14000522ce0 gz:0x14000522ce8 tar:0x14000522c70 tar.bz2:0x14000522c90 tar.gz:0x14000522ca0 tar.xz:0x14000522cb0 tar.zst:0x14000522cd0 tbz2:0x14000522c90 tgz:0x14000522ca0 txz:0x14000522cb0 tzst:0x14000522cd0 xz:0x14000522d10 zip:0x14000522d20 zst:0x14000522d18] Getters:map[file:0x14001533340 http:0x14000116b40 https:0x14000116b90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 03:43:26.420701    7820 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --driver=qemu2 : exit status 80 (5.266007458s)

-- stdout --
	* [NoKubernetes-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-797000
	* Restarting existing qemu2 VM for "NoKubernetes-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000: exit status 7 (73.885042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.34s)
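
The interleaved pid-7820 lines above come from the concurrently running hyperkit driver-update test and show minikube's two-step download fallback: it first fetches the arch-specific release asset (the -arm64 binary) with its .sha256 checksum, and when that checksum file 404s it retries the common, unsuffixed asset. A rough sketch of that fallback decision, assuming plain HTTP HEAD probes in place of the checksummed go-getter download minikube actually performs:

package main

import (
	"fmt"
	"net/http"
)

// pickDriverURL tries the arch-specific asset first and falls back to the
// common one, mirroring driver.go's "trying to get the common version" path.
func pickDriverURL(base, arch string) (string, error) {
	for _, url := range []string{base + "-" + arch, base} {
		resp, err := http.Head(url)
		if err != nil {
			return "", err
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return url, nil
		}
	}
	return "", fmt.Errorf("no release asset found under %s", base)
}

func main() {
	url, err := pickDriverURL("https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit", "arm64")
	fmt.Println(url, err)
}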

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --driver=qemu2 
I1209 03:43:29.380304    7820 install.go:79] stdout: 
W1209 03:43:29.380485    7820 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit 

I1209 03:43:29.380517    7820 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit]
I1209 03:43:29.396957    7820 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit]
I1209 03:43:29.410414    7820 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit]
I1209 03:43:29.421082    7820 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/002/docker-machine-driver-hyperkit]
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --driver=qemu2 : exit status 80 (5.258291208s)

-- stdout --
	* [NoKubernetes-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-797000
	* Restarting existing qemu2 VM for "NoKubernetes-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000: exit status 7 (72.740167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (6.87s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-797000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-797000 --driver=qemu2 : exit status 80 (6.804889417s)

-- stdout --
	* [NoKubernetes-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-797000
	* Restarting existing qemu2 VM for "NoKubernetes-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-797000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-797000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-797000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-797000 -n NoKubernetes-797000: exit status 7 (67.807875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.87s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.909920292s)

-- stdout --
	* [old-k8s-version-520000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-520000" primary control-plane node in "old-k8s-version-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:44:04.751399   10208 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:04.751540   10208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:04.751543   10208 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:04.751546   10208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:04.751702   10208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:04.752892   10208 out.go:352] Setting JSON to false
	I1209 03:44:04.770575   10208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6215,"bootTime":1733738429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:44:04.770647   10208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:44:04.777691   10208 out.go:177] * [old-k8s-version-520000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:44:04.785660   10208 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:44:04.785708   10208 notify.go:220] Checking for updates...
	I1209 03:44:04.794481   10208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:44:04.797676   10208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:44:04.801393   10208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:44:04.806260   10208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:44:04.810654   10208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:44:04.815063   10208 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:04.815148   10208 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:04.815205   10208 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:44:04.818610   10208 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:44:04.825637   10208 start.go:297] selected driver: qemu2
	I1209 03:44:04.825645   10208 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:44:04.825660   10208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:44:04.828207   10208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:44:04.831553   10208 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:44:04.835714   10208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:44:04.835745   10208 cni.go:84] Creating CNI manager for ""
	I1209 03:44:04.835767   10208 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 03:44:04.835806   10208 start.go:340] cluster config:
	{Name:old-k8s-version-520000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:04.840448   10208 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:04.847590   10208 out.go:177] * Starting "old-k8s-version-520000" primary control-plane node in "old-k8s-version-520000" cluster
	I1209 03:44:04.851674   10208 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:44:04.851691   10208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:44:04.851702   10208 cache.go:56] Caching tarball of preloaded images
	I1209 03:44:04.851780   10208 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:44:04.851792   10208 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 03:44:04.851860   10208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/old-k8s-version-520000/config.json ...
	I1209 03:44:04.851870   10208 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/old-k8s-version-520000/config.json: {Name:mkc36907ef859b4f0f0a1fbde5c4ea0defde9fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:44:04.852276   10208 start.go:360] acquireMachinesLock for old-k8s-version-520000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:04.852322   10208 start.go:364] duration metric: took 40.5µs to acquireMachinesLock for "old-k8s-version-520000"
	I1209 03:44:04.852333   10208 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:44:04.852363   10208 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:44:04.860618   10208 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:44:04.877101   10208 start.go:159] libmachine.API.Create for "old-k8s-version-520000" (driver="qemu2")
	I1209 03:44:04.877126   10208 client.go:168] LocalClient.Create starting
	I1209 03:44:04.877192   10208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:44:04.877235   10208 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:04.877246   10208 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:04.877284   10208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:44:04.877317   10208 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:04.877324   10208 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:04.877878   10208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:44:05.040210   10208 main.go:141] libmachine: Creating SSH key...
	I1209 03:44:05.171032   10208 main.go:141] libmachine: Creating Disk image...
	I1209 03:44:05.171039   10208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:44:05.171282   10208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:05.181396   10208 main.go:141] libmachine: STDOUT: 
	I1209 03:44:05.181412   10208 main.go:141] libmachine: STDERR: 
	I1209 03:44:05.181477   10208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2 +20000M
	I1209 03:44:05.190028   10208 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:44:05.190041   10208 main.go:141] libmachine: STDERR: 
	I1209 03:44:05.190060   10208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:05.190065   10208 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:44:05.190077   10208 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:05.190106   10208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:13:34:81:1c:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:05.191872   10208 main.go:141] libmachine: STDOUT: 
	I1209 03:44:05.191892   10208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:05.191916   10208 client.go:171] duration metric: took 314.7895ms to LocalClient.Create
	I1209 03:44:07.194056   10208 start.go:128] duration metric: took 2.341715792s to createHost
	I1209 03:44:07.194278   10208 start.go:83] releasing machines lock for "old-k8s-version-520000", held for 2.34194575s
	W1209 03:44:07.194354   10208 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:07.209629   10208 out.go:177] * Deleting "old-k8s-version-520000" in qemu2 ...
	W1209 03:44:07.238732   10208 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:07.238764   10208 start.go:729] Will try again in 5 seconds ...
	I1209 03:44:12.240964   10208 start.go:360] acquireMachinesLock for old-k8s-version-520000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:12.241468   10208 start.go:364] duration metric: took 424.292µs to acquireMachinesLock for "old-k8s-version-520000"
	I1209 03:44:12.241615   10208 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:44:12.241905   10208 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:44:12.259781   10208 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:44:12.308412   10208 start.go:159] libmachine.API.Create for "old-k8s-version-520000" (driver="qemu2")
	I1209 03:44:12.308468   10208 client.go:168] LocalClient.Create starting
	I1209 03:44:12.308586   10208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:44:12.308670   10208 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:12.308687   10208 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:12.308743   10208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:44:12.308800   10208 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:12.308818   10208 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:12.309415   10208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:44:12.481783   10208 main.go:141] libmachine: Creating SSH key...
	I1209 03:44:12.560805   10208 main.go:141] libmachine: Creating Disk image...
	I1209 03:44:12.560810   10208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:44:12.561051   10208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:12.570998   10208 main.go:141] libmachine: STDOUT: 
	I1209 03:44:12.571013   10208 main.go:141] libmachine: STDERR: 
	I1209 03:44:12.571080   10208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2 +20000M
	I1209 03:44:12.579531   10208 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:44:12.579558   10208 main.go:141] libmachine: STDERR: 
	I1209 03:44:12.579572   10208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:12.579577   10208 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:44:12.579583   10208 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:12.579618   10208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f8:02:e8:08:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:12.581397   10208 main.go:141] libmachine: STDOUT: 
	I1209 03:44:12.581411   10208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:12.581425   10208 client.go:171] duration metric: took 272.95725ms to LocalClient.Create
	I1209 03:44:14.583571   10208 start.go:128] duration metric: took 2.341674s to createHost
	I1209 03:44:14.583646   10208 start.go:83] releasing machines lock for "old-k8s-version-520000", held for 2.342197042s
	W1209 03:44:14.584064   10208 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:14.597964   10208 out.go:201] 
	W1209 03:44:14.602041   10208 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:14.602118   10208 out.go:270] * 
	* 
	W1209 03:44:14.604805   10208 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:44:14.615871   10208 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (71.237208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
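
Note: every failure in this serial group shares the root cause visible above: socket_vmnet_client cannot reach the socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets its network backend and host creation aborts with GUEST_PROVISION. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs describe (the service name and commands below are assumptions, not taken from this log):

	# does the socket the log is dialing actually exist?
	ls -l /var/run/socket_vmnet
	# is the daemon loaded? if not, start it (requires root)
	sudo launchctl list | grep -i socket_vmnet
	sudo brew services start socket_vmnet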

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-520000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-520000 create -f testdata/busybox.yaml: exit status 1 (29.245792ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-520000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-520000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (33.645792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (33.396583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
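
Note: the context "old-k8s-version-520000" error here is a downstream symptom rather than a separate bug: FirstStart never created the cluster, so minikube never wrote a kubeconfig entry for this profile. A quick confirmation with stock kubectl (illustrative commands, not part of the harness):

	kubectl config get-contexts      # the profile's context will be absent
	kubectl config current-context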

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-520000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-520000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-520000 describe deploy/metrics-server -n kube-system: exit status 1 (27.717792ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-520000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-520000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (34.291541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
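
Note: what this test would have asserted on a running cluster: after enabling metrics-server with a custom image and registry, the deployment's container image should resolve to fake.domain/registry.k8s.io/echoserver:1.4. The check is roughly equivalent to the sketch below (the jsonpath query is an illustration, not the harness's actual code):

	kubectl --context old-k8s-version-520000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4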

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.199712209s)

                                                
                                                
-- stdout --
	* [old-k8s-version-520000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-520000" primary control-plane node in "old-k8s-version-520000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-520000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-520000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:44:18.430120   10256 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:18.430286   10256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:18.430288   10256 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:18.430290   10256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:18.430432   10256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:18.431530   10256 out.go:352] Setting JSON to false
	I1209 03:44:18.449249   10256 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6229,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:44:18.449347   10256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:44:18.454082   10256 out.go:177] * [old-k8s-version-520000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:44:18.461129   10256 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:44:18.461159   10256 notify.go:220] Checking for updates...
	I1209 03:44:18.469060   10256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:44:18.472077   10256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:44:18.475083   10256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:44:18.478080   10256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:44:18.481096   10256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:44:18.484330   10256 config.go:182] Loaded profile config "old-k8s-version-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1209 03:44:18.488059   10256 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 03:44:18.491083   10256 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:44:18.495071   10256 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:44:18.502107   10256 start.go:297] selected driver: qemu2
	I1209 03:44:18.502114   10256 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:18.502187   10256 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:44:18.504765   10256 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:44:18.504791   10256 cni.go:84] Creating CNI manager for ""
	I1209 03:44:18.504818   10256 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 03:44:18.504838   10256 start.go:340] cluster config:
	{Name:old-k8s-version-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:18.509423   10256 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:18.518087   10256 out.go:177] * Starting "old-k8s-version-520000" primary control-plane node in "old-k8s-version-520000" cluster
	I1209 03:44:18.521961   10256 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:44:18.521976   10256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:44:18.521988   10256 cache.go:56] Caching tarball of preloaded images
	I1209 03:44:18.522068   10256 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:44:18.522074   10256 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 03:44:18.522128   10256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/old-k8s-version-520000/config.json ...
	I1209 03:44:18.522638   10256 start.go:360] acquireMachinesLock for old-k8s-version-520000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:18.522670   10256 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "old-k8s-version-520000"
	I1209 03:44:18.522679   10256 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:44:18.522685   10256 fix.go:54] fixHost starting: 
	I1209 03:44:18.522804   10256 fix.go:112] recreateIfNeeded on old-k8s-version-520000: state=Stopped err=<nil>
	W1209 03:44:18.522813   10256 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:44:18.527115   10256 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-520000" ...
	I1209 03:44:18.534054   10256 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:18.534099   10256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f8:02:e8:08:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:18.536353   10256 main.go:141] libmachine: STDOUT: 
	I1209 03:44:18.536374   10256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:18.536404   10256 fix.go:56] duration metric: took 13.719125ms for fixHost
	I1209 03:44:18.536408   10256 start.go:83] releasing machines lock for "old-k8s-version-520000", held for 13.733416ms
	W1209 03:44:18.536414   10256 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:18.536473   10256 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:18.536478   10256 start.go:729] Will try again in 5 seconds ...
	I1209 03:44:23.538595   10256 start.go:360] acquireMachinesLock for old-k8s-version-520000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:23.539063   10256 start.go:364] duration metric: took 348.584µs to acquireMachinesLock for "old-k8s-version-520000"
	I1209 03:44:23.539209   10256 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:44:23.539229   10256 fix.go:54] fixHost starting: 
	I1209 03:44:23.539895   10256 fix.go:112] recreateIfNeeded on old-k8s-version-520000: state=Stopped err=<nil>
	W1209 03:44:23.539922   10256 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:44:23.548461   10256 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-520000" ...
	I1209 03:44:23.552431   10256 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:23.552674   10256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f8:02:e8:08:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/old-k8s-version-520000/disk.qcow2
	I1209 03:44:23.562431   10256 main.go:141] libmachine: STDOUT: 
	I1209 03:44:23.562525   10256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:23.562615   10256 fix.go:56] duration metric: took 23.383ms for fixHost
	I1209 03:44:23.562635   10256 start.go:83] releasing machines lock for "old-k8s-version-520000", held for 23.5355ms
	W1209 03:44:23.562866   10256 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-520000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-520000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:23.570463   10256 out.go:201] 
	W1209 03:44:23.574592   10256 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:23.574653   10256 out.go:270] * 
	* 
	W1209 03:44:23.577128   10256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:44:23.584522   10256 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (71.276167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
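
Note: SecondStart fails faster (about 5s rather than 10s) because it reuses the existing stopped machine ("Skipping create...Using existing machine configuration") and only retries the QEMU launch, which dies on the same socket_vmnet connection refusal. Once the daemon is reachable, the recovery the log itself suggests amounts to (flags copied from the failing invocation above):

	out/minikube-darwin-arm64 delete -p old-k8s-version-520000
	out/minikube-darwin-arm64 start -p old-k8s-version-520000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0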

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-520000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (35.789625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-520000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.411708ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-520000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (34.237792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-520000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (33.963584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
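
Note: in the want/got diff above, lines prefixed with "-" are expected images that were absent from the actual list; because the VM never booted, "image list" returned an empty set and every v1.20.0 image is reported missing. The check can be rerun by hand with the same command the harness used:

	out/minikube-darwin-arm64 -p old-k8s-version-520000 image list --format=json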

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-520000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-520000 --alsologtostderr -v=1: exit status 83 (45.3165ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-520000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-520000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:44:23.876422   10275 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:23.876858   10275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:23.876861   10275 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:23.876864   10275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:23.877016   10275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:23.877232   10275 out.go:352] Setting JSON to false
	I1209 03:44:23.877238   10275 mustload.go:65] Loading cluster: old-k8s-version-520000
	I1209 03:44:23.877455   10275 config.go:182] Loaded profile config "old-k8s-version-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1209 03:44:23.881425   10275 out.go:177] * The control-plane node old-k8s-version-520000 host is not running: state=Stopped
	I1209 03:44:23.885425   10275 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-520000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-520000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (33.794417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (34.275709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-467000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-467000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.845303167s)

                                                
                                                
-- stdout --
	* [no-preload-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-467000" primary control-plane node in "no-preload-467000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-467000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 03:44:24.221443   10292 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:24.221593   10292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:24.221599   10292 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:24.221602   10292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:24.221757   10292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:24.222926   10292 out.go:352] Setting JSON to false
	I1209 03:44:24.240606   10292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6235,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:44:24.240671   10292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:44:24.245481   10292 out.go:177] * [no-preload-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:44:24.252363   10292 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:44:24.252434   10292 notify.go:220] Checking for updates...
	I1209 03:44:24.259407   10292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:44:24.262377   10292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:44:24.265398   10292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:44:24.268485   10292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:44:24.271407   10292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:44:24.274831   10292 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:24.274899   10292 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:24.274952   10292 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:44:24.279465   10292 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:44:24.286417   10292 start.go:297] selected driver: qemu2
	I1209 03:44:24.286423   10292 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:44:24.286430   10292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:44:24.289015   10292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:44:24.293502   10292 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:44:24.296472   10292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:44:24.296495   10292 cni.go:84] Creating CNI manager for ""
	I1209 03:44:24.296520   10292 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:44:24.296525   10292 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:44:24.296564   10292 start.go:340] cluster config:
	{Name:no-preload-467000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:24.301384   10292 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.308442   10292 out.go:177] * Starting "no-preload-467000" primary control-plane node in "no-preload-467000" cluster
	I1209 03:44:24.312395   10292 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:44:24.312494   10292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/no-preload-467000/config.json ...
	I1209 03:44:24.312516   10292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/no-preload-467000/config.json: {Name:mk21c94ad4954ba3845a078fbd74c321c9026f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:44:24.312531   10292 cache.go:107] acquiring lock: {Name:mkf0ddcf765528f2b9e7d6371fc550b01145cef4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312539   10292 cache.go:107] acquiring lock: {Name:mk8587405e98dd622ac9aacbbb5dd1849c010caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312533   10292 cache.go:107] acquiring lock: {Name:mkc5bd8c992b6d32c51edc951b61806522b5d8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312553   10292 cache.go:107] acquiring lock: {Name:mkff5c9b0f232a193e81a2eebf2424f619737476 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312568   10292 cache.go:107] acquiring lock: {Name:mk4ab9cb5dbdc673ab2ea9fb5c4eb0f2d132847a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312590   10292 cache.go:107] acquiring lock: {Name:mk7ceef1333c8bceb7420f7fd8618a471bb3dd70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312651   10292 cache.go:107] acquiring lock: {Name:mkdde34535f3c997674ea218a35af4345510b147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312739   10292 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 03:44:24.312922   10292 cache.go:107] acquiring lock: {Name:mk4e2732dd95f5fa832195aa3eb4a271dfd75a57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:24.312953   10292 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 03:44:24.313015   10292 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 03:44:24.313102   10292 start.go:360] acquireMachinesLock for no-preload-467000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:24.313123   10292 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 03:44:24.313138   10292 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 610.25µs
	I1209 03:44:24.313172   10292 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 03:44:24.313174   10292 start.go:364] duration metric: took 65.125µs to acquireMachinesLock for "no-preload-467000"
	I1209 03:44:24.313186   10292 start.go:93] Provisioning new machine with config: &{Name:no-preload-467000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:44:24.313218   10292 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:44:24.313285   10292 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 03:44:24.313312   10292 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 03:44:24.313311   10292 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 03:44:24.313393   10292 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 03:44:24.317492   10292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:44:24.324691   10292 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 03:44:24.324789   10292 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 03:44:24.325275   10292 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 03:44:24.327358   10292 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 03:44:24.327375   10292 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 03:44:24.327407   10292 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 03:44:24.327431   10292 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 03:44:24.336568   10292 start.go:159] libmachine.API.Create for "no-preload-467000" (driver="qemu2")
	I1209 03:44:24.336590   10292 client.go:168] LocalClient.Create starting
	I1209 03:44:24.336670   10292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:44:24.336710   10292 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:24.336720   10292 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:24.336756   10292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:44:24.336785   10292 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:24.336792   10292 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:24.337147   10292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:44:24.503529   10292 main.go:141] libmachine: Creating SSH key...
	I1209 03:44:24.551211   10292 main.go:141] libmachine: Creating Disk image...
	I1209 03:44:24.551228   10292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:44:24.551493   10292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:24.561006   10292 main.go:141] libmachine: STDOUT: 
	I1209 03:44:24.561019   10292 main.go:141] libmachine: STDERR: 
	I1209 03:44:24.561072   10292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2 +20000M
	I1209 03:44:24.570381   10292 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:44:24.570415   10292 main.go:141] libmachine: STDERR: 
	I1209 03:44:24.570435   10292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:24.570442   10292 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:44:24.570459   10292 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:24.570496   10292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d5:95:03:ea:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:24.572768   10292 main.go:141] libmachine: STDOUT: 
	I1209 03:44:24.572783   10292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:24.572804   10292 client.go:171] duration metric: took 236.213ms to LocalClient.Create
	I1209 03:44:24.751937   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 03:44:24.793166   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1209 03:44:24.815051   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1209 03:44:24.910190   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 03:44:24.950968   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 03:44:24.960699   10292 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1209 03:44:24.960716   10292 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 648.159542ms
	I1209 03:44:24.960725   10292 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1209 03:44:25.006264   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 03:44:25.090081   10292 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 03:44:26.573017   10292 start.go:128] duration metric: took 2.25979725s to createHost
	I1209 03:44:26.573077   10292 start.go:83] releasing machines lock for "no-preload-467000", held for 2.259936375s
	W1209 03:44:26.573134   10292 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:26.592248   10292 out.go:177] * Deleting "no-preload-467000" in qemu2 ...
	W1209 03:44:26.628402   10292 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:26.628440   10292 start.go:729] Will try again in 5 seconds ...
	I1209 03:44:28.245576   10292 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1209 03:44:28.245637   10292 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.932782833s
	I1209 03:44:28.245715   10292 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1209 03:44:29.308919   10292 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1209 03:44:29.308972   10292 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 4.99645725s
	I1209 03:44:29.308998   10292 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1209 03:44:29.585186   10292 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1209 03:44:29.585239   10292 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 5.2727795s
	I1209 03:44:29.585285   10292 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1209 03:44:29.692258   10292 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1209 03:44:29.692322   10292 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 5.379884s
	I1209 03:44:29.692355   10292 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1209 03:44:29.995871   10292 cache.go:157] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1209 03:44:29.995933   10292 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.68350675s
	I1209 03:44:29.995987   10292 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1209 03:44:31.628655   10292 start.go:360] acquireMachinesLock for no-preload-467000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:31.629192   10292 start.go:364] duration metric: took 457.125µs to acquireMachinesLock for "no-preload-467000"
	I1209 03:44:31.629312   10292 start.go:93] Provisioning new machine with config: &{Name:no-preload-467000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:44:31.629521   10292 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:44:31.649351   10292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:44:31.698764   10292 start.go:159] libmachine.API.Create for "no-preload-467000" (driver="qemu2")
	I1209 03:44:31.698804   10292 client.go:168] LocalClient.Create starting
	I1209 03:44:31.698929   10292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:44:31.699012   10292 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:31.699029   10292 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:31.699095   10292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:44:31.699150   10292 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:31.699167   10292 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:31.699767   10292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:44:31.871773   10292 main.go:141] libmachine: Creating SSH key...
	I1209 03:44:31.971936   10292 main.go:141] libmachine: Creating Disk image...
	I1209 03:44:31.971942   10292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:44:31.972196   10292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:31.982513   10292 main.go:141] libmachine: STDOUT: 
	I1209 03:44:31.982534   10292 main.go:141] libmachine: STDERR: 
	I1209 03:44:31.982597   10292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2 +20000M
	I1209 03:44:31.991195   10292 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:44:31.991210   10292 main.go:141] libmachine: STDERR: 
	I1209 03:44:31.991219   10292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:31.991224   10292 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:44:31.991234   10292 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:31.991276   10292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b7:33:82:e8:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:31.993231   10292 main.go:141] libmachine: STDOUT: 
	I1209 03:44:31.993246   10292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:31.993259   10292 client.go:171] duration metric: took 294.456ms to LocalClient.Create
	I1209 03:44:33.994034   10292 start.go:128] duration metric: took 2.3644845s to createHost
	I1209 03:44:33.994105   10292 start.go:83] releasing machines lock for "no-preload-467000", held for 2.36490625s
	W1209 03:44:33.994436   10292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-467000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-467000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:34.005955   10292 out.go:201] 
	W1209 03:44:34.009916   10292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:34.009951   10292 out.go:270] * 
	* 
	W1209 03:44:34.012544   10292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:44:34.021851   10292 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-467000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (74.861959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
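
Note: as with the other qemu2 failures in this run, the root cause is that nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor. A rough checklist for the build host, assuming the usual Homebrew-managed socket_vmnet install (paths and service names may differ on other setups):

	ls -l /var/run/socket_vmnet               # the socket file should exist
	sudo launchctl list | grep socket_vmnet   # the launchd job should be loaded
	sudo brew services restart socket_vmnet   # restart the daemon if it is not running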

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-467000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-467000 create -f testdata/busybox.yaml: exit status 1 (29.661ms)

** stderr ** 
	error: context "no-preload-467000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-467000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (33.890792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (33.499083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
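
Note: this failure and the remaining no-preload failures below are downstream of FirstStart: since the VM never booted, "start -p no-preload-467000" never wrote a kubeconfig context, so every "kubectl --context no-preload-467000" call fails before reaching a cluster. A quick confirmation:

	kubectl config get-contexts   # "no-preload-467000" is absent after the failed start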

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-467000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-467000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-467000 describe deploy/metrics-server -n kube-system: exit status 1 (27.076541ms)

** stderr ** 
	error: context "no-preload-467000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-467000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (34.21025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-467000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-467000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.192098417s)

-- stdout --
	* [no-preload-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-467000" primary control-plane node in "no-preload-467000" cluster
	* Restarting existing qemu2 VM for "no-preload-467000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-467000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:44:37.490335   10380 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:37.490501   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:37.490504   10380 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:37.490506   10380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:37.490637   10380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:37.491642   10380 out.go:352] Setting JSON to false
	I1209 03:44:37.509576   10380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6248,"bootTime":1733738429,"procs":553,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:44:37.509640   10380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:44:37.514329   10380 out.go:177] * [no-preload-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:44:37.521361   10380 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:44:37.521429   10380 notify.go:220] Checking for updates...
	I1209 03:44:37.528317   10380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:44:37.531301   10380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:44:37.534303   10380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:44:37.537318   10380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:44:37.540333   10380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:44:37.542067   10380 config.go:182] Loaded profile config "no-preload-467000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:37.542331   10380 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:44:37.546228   10380 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:44:37.553138   10380 start.go:297] selected driver: qemu2
	I1209 03:44:37.553145   10380 start.go:901] validating driver "qemu2" against &{Name:no-preload-467000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:37.553208   10380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:44:37.555617   10380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:44:37.555643   10380 cni.go:84] Creating CNI manager for ""
	I1209 03:44:37.555667   10380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:44:37.555689   10380 start.go:340] cluster config:
	{Name:no-preload-467000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:37.559886   10380 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.568262   10380 out.go:177] * Starting "no-preload-467000" primary control-plane node in "no-preload-467000" cluster
	I1209 03:44:37.572250   10380 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:44:37.572330   10380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/no-preload-467000/config.json ...
	I1209 03:44:37.572351   10380 cache.go:107] acquiring lock: {Name:mkdde34535f3c997674ea218a35af4345510b147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572351   10380 cache.go:107] acquiring lock: {Name:mkf0ddcf765528f2b9e7d6371fc550b01145cef4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572360   10380 cache.go:107] acquiring lock: {Name:mk8587405e98dd622ac9aacbbb5dd1849c010caa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572382   10380 cache.go:107] acquiring lock: {Name:mk4ab9cb5dbdc673ab2ea9fb5c4eb0f2d132847a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572450   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1209 03:44:37.572459   10380 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 108.917µs
	I1209 03:44:37.572466   10380 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1209 03:44:37.572466   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1209 03:44:37.572476   10380 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 128.458µs
	I1209 03:44:37.572473   10380 cache.go:107] acquiring lock: {Name:mk7ceef1333c8bceb7420f7fd8618a471bb3dd70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572469   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1209 03:44:37.572483   10380 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 101.959µs
	I1209 03:44:37.572491   10380 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1209 03:44:37.572484   10380 cache.go:107] acquiring lock: {Name:mkc5bd8c992b6d32c51edc951b61806522b5d8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572479   10380 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1209 03:44:37.572475   10380 cache.go:107] acquiring lock: {Name:mkff5c9b0f232a193e81a2eebf2424f619737476 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572551   10380 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 03:44:37.572533   10380 cache.go:107] acquiring lock: {Name:mk4e2732dd95f5fa832195aa3eb4a271dfd75a57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:37.572578   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1209 03:44:37.572581   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1209 03:44:37.572583   10380 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 99.416µs
	I1209 03:44:37.572587   10380 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1209 03:44:37.572586   10380 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 111.666µs
	I1209 03:44:37.572586   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 03:44:37.572590   10380 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1209 03:44:37.572595   10380 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 248.417µs
	I1209 03:44:37.572602   10380 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 03:44:37.572737   10380 cache.go:115] /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1209 03:44:37.572743   10380 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 258.292µs
	I1209 03:44:37.572748   10380 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1209 03:44:37.572897   10380 start.go:360] acquireMachinesLock for no-preload-467000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:37.572937   10380 start.go:364] duration metric: took 33.208µs to acquireMachinesLock for "no-preload-467000"
	I1209 03:44:37.572946   10380 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:44:37.572951   10380 fix.go:54] fixHost starting: 
	I1209 03:44:37.573076   10380 fix.go:112] recreateIfNeeded on no-preload-467000: state=Stopped err=<nil>
	W1209 03:44:37.573082   10380 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:44:37.581254   10380 out.go:177] * Restarting existing qemu2 VM for "no-preload-467000" ...
	I1209 03:44:37.585256   10380 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:37.585293   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b7:33:82:e8:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:37.585797   10380 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 03:44:37.587759   10380 main.go:141] libmachine: STDOUT: 
	I1209 03:44:37.587785   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:37.587812   10380 fix.go:56] duration metric: took 14.860792ms for fixHost
	I1209 03:44:37.587816   10380 start.go:83] releasing machines lock for "no-preload-467000", held for 14.874833ms
	W1209 03:44:37.587823   10380 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:37.587876   10380 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:37.587881   10380 start.go:729] Will try again in 5 seconds ...
	I1209 03:44:38.034337   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1209 03:44:42.588165   10380 start.go:360] acquireMachinesLock for no-preload-467000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:42.588557   10380 start.go:364] duration metric: took 312.584µs to acquireMachinesLock for "no-preload-467000"
	I1209 03:44:42.588683   10380 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:44:42.588703   10380 fix.go:54] fixHost starting: 
	I1209 03:44:42.589371   10380 fix.go:112] recreateIfNeeded on no-preload-467000: state=Stopped err=<nil>
	W1209 03:44:42.589397   10380 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:44:42.593947   10380 out.go:177] * Restarting existing qemu2 VM for "no-preload-467000" ...
	I1209 03:44:42.602776   10380 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:42.603025   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:b7:33:82:e8:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/no-preload-467000/disk.qcow2
	I1209 03:44:42.614264   10380 main.go:141] libmachine: STDOUT: 
	I1209 03:44:42.614323   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:42.614422   10380 fix.go:56] duration metric: took 25.719125ms for fixHost
	I1209 03:44:42.614442   10380 start.go:83] releasing machines lock for "no-preload-467000", held for 25.862ms
	W1209 03:44:42.614626   10380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-467000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-467000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:42.620903   10380 out.go:201] 
	W1209 03:44:42.623985   10380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:42.624015   10380 out.go:270] * 
	* 
	W1209 03:44:42.626584   10380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:44:42.635925   10380 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-467000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (72.488042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
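
All of the failures in this group trace back to the one error repeated in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. Below is a minimal standalone probe that reproduces the driver's failure mode; it is an editor's sketch, not part of the test suite, and assumes only the socket path shown in the log:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the socket_vmnet_client command lines above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means nothing is listening on the socket,
		// which is exactly the failure repeated throughout this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails, the socket_vmnet daemon on the CI host is down; restarting it (it typically runs as a root launchd service, per the minikube qemu driver docs) is the usual remedy, and the suggested "minikube delete -p no-preload-467000" only clears the stale profile afterwards.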

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-467000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (35.969667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
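
The context "no-preload-467000" does not exist errors in this and the following subtests are downstream of the failed start: the cluster was never provisioned, so minikube never wrote a context for the profile into the kubeconfig. A hedged sketch of that precondition check using client-go's clientcmd loader (illustrative only; the suite uses its own helpers, and the KUBECONFIG path is the one printed earlier in this log):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG as exported by the CI job, e.g. .../20068-6536/kubeconfig.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["no-preload-467000"]; !ok {
		// Same condition kubectl reports as:
		//   error: context "no-preload-467000" does not exist
		fmt.Println("context missing; the cluster was never provisioned")
		os.Exit(1)
	}
	fmt.Println("context present")
}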

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-467000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-467000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-467000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.518042ms)
** stderr ** 
	error: context "no-preload-467000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-467000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (34.217667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-467000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (33.947875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
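
The -want +got block above is a go-cmp style diff: every expected v1.31.2 image sits on the -want side because "image list" returned nothing from the stopped VM. For reference, a minimal sketch of how a diff of that shape is produced (assuming the github.com/google/go-cmp module; the slices are abbreviated from the expected list above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.2",
		// ... remaining v1.31.2 images from the failure above
	}
	var got []string // "image list" yields nothing while the host is Stopped

	// Entries present only in want are printed with "-", entries present
	// only in got with "+", matching the "(-want +got)" output above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
	}
}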

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-467000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-467000 --alsologtostderr -v=1: exit status 83 (44.503041ms)
-- stdout --
	* The control-plane node no-preload-467000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-467000"
-- /stdout --
** stderr ** 
	I1209 03:44:42.932215   10406 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:42.932407   10406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:42.932410   10406 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:42.932412   10406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:42.932544   10406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:42.932789   10406 out.go:352] Setting JSON to false
	I1209 03:44:42.932796   10406 mustload.go:65] Loading cluster: no-preload-467000
	I1209 03:44:42.933030   10406 config.go:182] Loaded profile config "no-preload-467000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:42.937023   10406 out.go:177] * The control-plane node no-preload-467000 host is not running: state=Stopped
	I1209 03:44:42.940775   10406 out.go:177]   To start a cluster, run: "minikube start -p no-preload-467000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-467000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (33.797625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (34.069625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
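
Note the two distinct exit codes in this block: pause fails with exit status 83 (the advice path for a host that is not running, as the stdout above shows), while the post-mortem status probes return exit status 7, which the harness tolerates as "may be ok". A generic sketch of reading such codes back from a subprocess (plain os/exec usage; this is not minikube's helper code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the post-mortem probe run throughout this report.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "no-preload-467000")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A stopped host prints "Stopped" and exits 7; the suite logs this
		// as "status error: exit status 7 (may be ok)".
		fmt.Printf("%s (exit %d, may be ok)\n", out, exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("%s", out)
}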

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-015000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-015000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.945302666s)
-- stdout --
	* [embed-certs-015000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-015000" primary control-plane node in "embed-certs-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I1209 03:44:43.280611   10423 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:43.280761   10423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:43.280764   10423 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:43.280767   10423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:43.280899   10423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:43.282018   10423 out.go:352] Setting JSON to false
	I1209 03:44:43.299803   10423 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6254,"bootTime":1733738429,"procs":554,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:44:43.299885   10423 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:44:43.304950   10423 out.go:177] * [embed-certs-015000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:44:43.312852   10423 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:44:43.312908   10423 notify.go:220] Checking for updates...
	I1209 03:44:43.318416   10423 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:44:43.321839   10423 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:44:43.324883   10423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:44:43.327869   10423 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:44:43.330817   10423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:44:43.334191   10423 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:43.334257   10423 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:43.334312   10423 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:44:43.338820   10423 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:44:43.345848   10423 start.go:297] selected driver: qemu2
	I1209 03:44:43.345855   10423 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:44:43.345863   10423 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:44:43.348423   10423 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:44:43.352823   10423 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:44:43.355904   10423 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:44:43.355919   10423 cni.go:84] Creating CNI manager for ""
	I1209 03:44:43.355940   10423 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:44:43.355946   10423 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:44:43.355970   10423 start.go:340] cluster config:
	{Name:embed-certs-015000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:43.360677   10423 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:43.368827   10423 out.go:177] * Starting "embed-certs-015000" primary control-plane node in "embed-certs-015000" cluster
	I1209 03:44:43.372818   10423 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:44:43.372836   10423 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:44:43.372849   10423 cache.go:56] Caching tarball of preloaded images
	I1209 03:44:43.372930   10423 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:44:43.372936   10423 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:44:43.373015   10423 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/embed-certs-015000/config.json ...
	I1209 03:44:43.373027   10423 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/embed-certs-015000/config.json: {Name:mk2f99010f4a939fbd90b75e3a5824e42dc97406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:44:43.373520   10423 start.go:360] acquireMachinesLock for embed-certs-015000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:43.373573   10423 start.go:364] duration metric: took 46.292µs to acquireMachinesLock for "embed-certs-015000"
	I1209 03:44:43.373586   10423 start.go:93] Provisioning new machine with config: &{Name:embed-certs-015000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:44:43.373620   10423 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:44:43.381851   10423 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:44:43.400094   10423 start.go:159] libmachine.API.Create for "embed-certs-015000" (driver="qemu2")
	I1209 03:44:43.400134   10423 client.go:168] LocalClient.Create starting
	I1209 03:44:43.400205   10423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:44:43.400249   10423 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:43.400259   10423 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:43.400298   10423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:44:43.400329   10423 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:43.400339   10423 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:43.400740   10423 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:44:43.564251   10423 main.go:141] libmachine: Creating SSH key...
	I1209 03:44:43.638488   10423 main.go:141] libmachine: Creating Disk image...
	I1209 03:44:43.638497   10423 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:44:43.638730   10423 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:43.648762   10423 main.go:141] libmachine: STDOUT: 
	I1209 03:44:43.648781   10423 main.go:141] libmachine: STDERR: 
	I1209 03:44:43.648853   10423 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2 +20000M
	I1209 03:44:43.657336   10423 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:44:43.657359   10423 main.go:141] libmachine: STDERR: 
	I1209 03:44:43.657375   10423 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:43.657380   10423 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:44:43.657394   10423 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:43.657428   10423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:68:7c:cd:14:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:43.659345   10423 main.go:141] libmachine: STDOUT: 
	I1209 03:44:43.659359   10423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:43.659378   10423 client.go:171] duration metric: took 259.242792ms to LocalClient.Create
	I1209 03:44:45.661537   10423 start.go:128] duration metric: took 2.287937583s to createHost
	I1209 03:44:45.661598   10423 start.go:83] releasing machines lock for "embed-certs-015000", held for 2.288057584s
	W1209 03:44:45.661652   10423 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:45.676868   10423 out.go:177] * Deleting "embed-certs-015000" in qemu2 ...
	W1209 03:44:45.706329   10423 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:45.706349   10423 start.go:729] Will try again in 5 seconds ...
	I1209 03:44:50.708617   10423 start.go:360] acquireMachinesLock for embed-certs-015000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:50.709315   10423 start.go:364] duration metric: took 586.625µs to acquireMachinesLock for "embed-certs-015000"
	I1209 03:44:50.709445   10423 start.go:93] Provisioning new machine with config: &{Name:embed-certs-015000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:44:50.709701   10423 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:44:50.728645   10423 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:44:50.776874   10423 start.go:159] libmachine.API.Create for "embed-certs-015000" (driver="qemu2")
	I1209 03:44:50.776921   10423 client.go:168] LocalClient.Create starting
	I1209 03:44:50.777062   10423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:44:50.777143   10423 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:50.777159   10423 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:50.777220   10423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:44:50.777278   10423 main.go:141] libmachine: Decoding PEM data...
	I1209 03:44:50.777292   10423 main.go:141] libmachine: Parsing certificate...
	I1209 03:44:50.777887   10423 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:44:50.950801   10423 main.go:141] libmachine: Creating SSH key...
	I1209 03:44:51.118572   10423 main.go:141] libmachine: Creating Disk image...
	I1209 03:44:51.118583   10423 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:44:51.118845   10423 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:51.128901   10423 main.go:141] libmachine: STDOUT: 
	I1209 03:44:51.128917   10423 main.go:141] libmachine: STDERR: 
	I1209 03:44:51.128982   10423 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2 +20000M
	I1209 03:44:51.137438   10423 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:44:51.137451   10423 main.go:141] libmachine: STDERR: 
	I1209 03:44:51.137466   10423 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:51.137471   10423 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:44:51.137481   10423 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:51.137528   10423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:28:62:f6:99:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:51.139378   10423 main.go:141] libmachine: STDOUT: 
	I1209 03:44:51.139399   10423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:51.139411   10423 client.go:171] duration metric: took 362.490916ms to LocalClient.Create
	I1209 03:44:53.141547   10423 start.go:128] duration metric: took 2.431852125s to createHost
	I1209 03:44:53.141591   10423 start.go:83] releasing machines lock for "embed-certs-015000", held for 2.432298833s
	W1209 03:44:53.141986   10423 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:53.157670   10423 out.go:201] 
	W1209 03:44:53.162709   10423 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:53.162771   10423 out.go:270] * 
	* 
	W1209 03:44:53.165448   10423 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:44:53.179590   10423 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-015000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (73.141708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
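
As in the no-preload group, the stderr above shows the create-and-retry shape of a first start: the initial create fails, the half-created profile is deleted, the driver waits five seconds, retries once, and only then exits with GUEST_PROVISION. A simplified sketch of that control flow (the function name is illustrative; the real logic lives in minikube's start code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start; in this run it fails
// identically both times because socket_vmnet is still unreachable.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		fmt.Println("host started")
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		return
	}
	fmt.Println("host started")
}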

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-015000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-015000 create -f testdata/busybox.yaml: exit status 1 (29.464959ms)
** stderr ** 
	error: context "embed-certs-015000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-015000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (33.946958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (34.008ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-015000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-015000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-015000 describe deploy/metrics-server -n kube-system: exit status 1 (27.361ms)
** stderr ** 
	error: context "embed-certs-015000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-015000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (33.673458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-015000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-015000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.186985583s)
-- stdout --
	* [embed-certs-015000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-015000" primary control-plane node in "embed-certs-015000" cluster
	* Restarting existing qemu2 VM for "embed-certs-015000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-015000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
-- /stdout --
** stderr ** 
	I1209 03:44:57.136765   10475 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:44:57.136943   10475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:57.136946   10475 out.go:358] Setting ErrFile to fd 2...
	I1209 03:44:57.136948   10475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:44:57.137079   10475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:44:57.138141   10475 out.go:352] Setting JSON to false
	I1209 03:44:57.155826   10475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6268,"bootTime":1733738429,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:44:57.155901   10475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:44:57.160584   10475 out.go:177] * [embed-certs-015000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:44:57.166637   10475 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:44:57.166711   10475 notify.go:220] Checking for updates...
	I1209 03:44:57.172439   10475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:44:57.175521   10475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:44:57.178609   10475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:44:57.180091   10475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:44:57.183530   10475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:44:57.186893   10475 config.go:182] Loaded profile config "embed-certs-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:44:57.187171   10475 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:44:57.188971   10475 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:44:57.196520   10475 start.go:297] selected driver: qemu2
	I1209 03:44:57.196528   10475 start.go:901] validating driver "qemu2" against &{Name:embed-certs-015000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:57.196590   10475 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:44:57.199131   10475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:44:57.199155   10475 cni.go:84] Creating CNI manager for ""
	I1209 03:44:57.199177   10475 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:44:57.199207   10475 start.go:340] cluster config:
	{Name:embed-certs-015000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:44:57.203586   10475 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:44:57.211449   10475 out.go:177] * Starting "embed-certs-015000" primary control-plane node in "embed-certs-015000" cluster
	I1209 03:44:57.215514   10475 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:44:57.215531   10475 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:44:57.215543   10475 cache.go:56] Caching tarball of preloaded images
	I1209 03:44:57.215618   10475 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:44:57.215630   10475 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:44:57.215687   10475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/embed-certs-015000/config.json ...
	I1209 03:44:57.216200   10475 start.go:360] acquireMachinesLock for embed-certs-015000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:44:57.216229   10475 start.go:364] duration metric: took 22.917µs to acquireMachinesLock for "embed-certs-015000"
	I1209 03:44:57.216238   10475 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:44:57.216243   10475 fix.go:54] fixHost starting: 
	I1209 03:44:57.216364   10475 fix.go:112] recreateIfNeeded on embed-certs-015000: state=Stopped err=<nil>
	W1209 03:44:57.216371   10475 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:44:57.224570   10475 out.go:177] * Restarting existing qemu2 VM for "embed-certs-015000" ...
	I1209 03:44:57.228531   10475 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:44:57.228582   10475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:28:62:f6:99:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:44:57.230839   10475 main.go:141] libmachine: STDOUT: 
	I1209 03:44:57.230858   10475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:44:57.230889   10475 fix.go:56] duration metric: took 14.645667ms for fixHost
	I1209 03:44:57.230894   10475 start.go:83] releasing machines lock for "embed-certs-015000", held for 14.66075ms
	W1209 03:44:57.230899   10475 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:44:57.230934   10475 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:44:57.230939   10475 start.go:729] Will try again in 5 seconds ...
	I1209 03:45:02.233087   10475 start.go:360] acquireMachinesLock for embed-certs-015000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:02.233484   10475 start.go:364] duration metric: took 310.333µs to acquireMachinesLock for "embed-certs-015000"
	I1209 03:45:02.233610   10475 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:45:02.233629   10475 fix.go:54] fixHost starting: 
	I1209 03:45:02.234352   10475 fix.go:112] recreateIfNeeded on embed-certs-015000: state=Stopped err=<nil>
	W1209 03:45:02.234378   10475 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:45:02.242809   10475 out.go:177] * Restarting existing qemu2 VM for "embed-certs-015000" ...
	I1209 03:45:02.246945   10475 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:02.247201   10475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:28:62:f6:99:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/embed-certs-015000/disk.qcow2
	I1209 03:45:02.256973   10475 main.go:141] libmachine: STDOUT: 
	I1209 03:45:02.257022   10475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:02.257081   10475 fix.go:56] duration metric: took 23.456792ms for fixHost
	I1209 03:45:02.257098   10475 start.go:83] releasing machines lock for "embed-certs-015000", held for 23.593208ms
	W1209 03:45:02.257319   10475 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-015000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-015000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:02.265614   10475 out.go:201] 
	W1209 03:45:02.268932   10475 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:02.268950   10475 out.go:270] * 
	* 
	W1209 03:45:02.271168   10475 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:45:02.279806   10475 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-015000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (73.045916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
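Every failure in this embed-certs group traces to the single driver error captured in the stderr above: the socket_vmnet daemon on the build host refuses connections on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client never hands QEMU a network file descriptor and the VM cannot start, leaving the profile "Stopped". A minimal host-side triage sketch follows; the paths match this report's install under /opt/socket_vmnet, but the daemon invocation and gateway address are assumed defaults for a source-built socket_vmnet, not taken from these logs:

	# Does the daemon's unix socket exist?
	ls -l /var/run/socket_vmnet

	# Is the daemon process alive?
	pgrep -fl socket_vmnet

	# If not, relaunch it (assumed default gateway for a source install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

The subtests below (UserAppExistsAfterStop, AddonExistsAfterStop, VerifyKubernetesImages, Pause) fail mechanically for the same reason: with the VM never restarted, the "embed-certs-015000" kubectl context does not exist and every status probe returns exit status 7 (Stopped).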

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-015000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (36.025458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-015000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-015000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-015000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.194625ms)

** stderr ** 
	error: context "embed-certs-015000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-015000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (34.158125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-015000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (33.856292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-015000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-015000 --alsologtostderr -v=1: exit status 83 (45.574458ms)

-- stdout --
	* The control-plane node embed-certs-015000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-015000"

-- /stdout --
** stderr ** 
	I1209 03:45:02.572230   10494 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:02.572437   10494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:02.572440   10494 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:02.572442   10494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:02.572568   10494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:02.572799   10494 out.go:352] Setting JSON to false
	I1209 03:45:02.572809   10494 mustload.go:65] Loading cluster: embed-certs-015000
	I1209 03:45:02.573030   10494 config.go:182] Loaded profile config "embed-certs-015000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:02.577212   10494 out.go:177] * The control-plane node embed-certs-015000 host is not running: state=Stopped
	I1209 03:45:02.581261   10494 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-015000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-015000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (34.054709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (34.1155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-015000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.874757666s)

-- stdout --
	* [default-k8s-diff-port-193000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-193000" primary control-plane node in "default-k8s-diff-port-193000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-193000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:45:03.033484   10518 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:03.033622   10518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:03.033626   10518 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:03.033629   10518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:03.033766   10518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:03.034907   10518 out.go:352] Setting JSON to false
	I1209 03:45:03.052567   10518 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6274,"bootTime":1733738429,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:45:03.052645   10518 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:45:03.057306   10518 out.go:177] * [default-k8s-diff-port-193000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:45:03.064197   10518 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:45:03.064241   10518 notify.go:220] Checking for updates...
	I1209 03:45:03.072062   10518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:45:03.075257   10518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:45:03.078250   10518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:45:03.081230   10518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:45:03.084298   10518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:45:03.087575   10518 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:03.087639   10518 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:03.087685   10518 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:45:03.092181   10518 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:45:03.099253   10518 start.go:297] selected driver: qemu2
	I1209 03:45:03.099260   10518 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:45:03.099268   10518 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:45:03.101784   10518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:45:03.106202   10518 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:45:03.109291   10518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:45:03.109313   10518 cni.go:84] Creating CNI manager for ""
	I1209 03:45:03.109334   10518 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:45:03.109341   10518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:45:03.109383   10518 start.go:340] cluster config:
	{Name:default-k8s-diff-port-193000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:03.114054   10518 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:45:03.122222   10518 out.go:177] * Starting "default-k8s-diff-port-193000" primary control-plane node in "default-k8s-diff-port-193000" cluster
	I1209 03:45:03.126295   10518 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:45:03.126314   10518 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:45:03.126324   10518 cache.go:56] Caching tarball of preloaded images
	I1209 03:45:03.126415   10518 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:45:03.126421   10518 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:45:03.126486   10518 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/default-k8s-diff-port-193000/config.json ...
	I1209 03:45:03.126498   10518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/default-k8s-diff-port-193000/config.json: {Name:mkfd2b270ed12fb81f1e564f32e3bbc0109c2fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:45:03.126982   10518 start.go:360] acquireMachinesLock for default-k8s-diff-port-193000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:03.127037   10518 start.go:364] duration metric: took 46.667µs to acquireMachinesLock for "default-k8s-diff-port-193000"
	I1209 03:45:03.127050   10518 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:03.127076   10518 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:03.134196   10518 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:45:03.152389   10518 start.go:159] libmachine.API.Create for "default-k8s-diff-port-193000" (driver="qemu2")
	I1209 03:45:03.152417   10518 client.go:168] LocalClient.Create starting
	I1209 03:45:03.152507   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:03.152549   10518 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:03.152563   10518 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:03.152601   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:03.152631   10518 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:03.152641   10518 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:03.153128   10518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:03.316276   10518 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:03.417920   10518 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:03.417926   10518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:03.418153   10518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:03.428185   10518 main.go:141] libmachine: STDOUT: 
	I1209 03:45:03.428212   10518 main.go:141] libmachine: STDERR: 
	I1209 03:45:03.428280   10518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2 +20000M
	I1209 03:45:03.436787   10518 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:03.436801   10518 main.go:141] libmachine: STDERR: 
	I1209 03:45:03.436825   10518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:03.436831   10518 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:03.436842   10518 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:03.436874   10518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:75:a4:f7:20:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:03.438665   10518 main.go:141] libmachine: STDOUT: 
	I1209 03:45:03.438680   10518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:03.438701   10518 client.go:171] duration metric: took 286.283292ms to LocalClient.Create
	I1209 03:45:05.440854   10518 start.go:128] duration metric: took 2.31380025s to createHost
	I1209 03:45:05.440920   10518 start.go:83] releasing machines lock for "default-k8s-diff-port-193000", held for 2.313914791s
	W1209 03:45:05.441034   10518 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:05.457140   10518 out.go:177] * Deleting "default-k8s-diff-port-193000" in qemu2 ...
	W1209 03:45:05.488342   10518 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:05.488364   10518 start.go:729] Will try again in 5 seconds ...
	I1209 03:45:10.490163   10518 start.go:360] acquireMachinesLock for default-k8s-diff-port-193000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:10.490573   10518 start.go:364] duration metric: took 309.458µs to acquireMachinesLock for "default-k8s-diff-port-193000"
	I1209 03:45:10.490657   10518 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:10.490849   10518 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:10.505466   10518 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:45:10.548136   10518 start.go:159] libmachine.API.Create for "default-k8s-diff-port-193000" (driver="qemu2")
	I1209 03:45:10.548200   10518 client.go:168] LocalClient.Create starting
	I1209 03:45:10.548343   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:10.548451   10518 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:10.548474   10518 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:10.548544   10518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:10.548608   10518 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:10.548626   10518 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:10.549277   10518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:10.724559   10518 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:10.805360   10518 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:10.805366   10518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:10.805589   10518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:10.815250   10518 main.go:141] libmachine: STDOUT: 
	I1209 03:45:10.815272   10518 main.go:141] libmachine: STDERR: 
	I1209 03:45:10.815336   10518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2 +20000M
	I1209 03:45:10.823805   10518 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:10.823824   10518 main.go:141] libmachine: STDERR: 
	I1209 03:45:10.823839   10518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:10.823845   10518 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:10.823852   10518 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:10.823884   10518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ec:56:64:9c:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:10.825636   10518 main.go:141] libmachine: STDOUT: 
	I1209 03:45:10.825648   10518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:10.825662   10518 client.go:171] duration metric: took 277.46225ms to LocalClient.Create
	I1209 03:45:12.827809   10518 start.go:128] duration metric: took 2.336971583s to createHost
	I1209 03:45:12.827869   10518 start.go:83] releasing machines lock for "default-k8s-diff-port-193000", held for 2.337321333s
	W1209 03:45:12.828244   10518 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:12.843017   10518 out.go:201] 
	W1209 03:45:12.848440   10518 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:12.848491   10518 out.go:270] * 
	* 
	W1209 03:45:12.850891   10518 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:45:12.862038   10518 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (69.970083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
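Once the socket_vmnet daemon is reachable again, this first start can be retried verbatim with the arguments recorded above; the command is copied from this report, only the retry itself is a suggestion:

	out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.2

As with the embed-certs group, the DeployApp, EnableAddonWhileActive, and SecondStart failures that follow are downstream of this start never succeeding.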

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-193000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-193000 create -f testdata/busybox.yaml: exit status 1 (29.314667ms)

** stderr ** 
	error: context "default-k8s-diff-port-193000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-193000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (33.969375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (33.208292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-193000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-193000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-193000 describe deploy/metrics-server -n kube-system: exit status 1 (27.618583ms)

** stderr ** 
	error: context "default-k8s-diff-port-193000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-193000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (34.062625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.193367375s)

-- stdout --
	* [default-k8s-diff-port-193000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-193000" primary control-plane node in "default-k8s-diff-port-193000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-193000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-193000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:45:16.865008   10568 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:16.865176   10568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:16.865179   10568 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:16.865181   10568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:16.865298   10568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:16.866402   10568 out.go:352] Setting JSON to false
	I1209 03:45:16.884054   10568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6287,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:45:16.884132   10568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:45:16.889193   10568 out.go:177] * [default-k8s-diff-port-193000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:45:16.896205   10568 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:45:16.896240   10568 notify.go:220] Checking for updates...
	I1209 03:45:16.904183   10568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:45:16.907106   10568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:45:16.910161   10568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:45:16.913179   10568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:45:16.916083   10568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:45:16.919371   10568 config.go:182] Loaded profile config "default-k8s-diff-port-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:16.919653   10568 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:45:16.923122   10568 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:45:16.930139   10568 start.go:297] selected driver: qemu2
	I1209 03:45:16.930145   10568 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:16.930194   10568 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:45:16.932829   10568 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:45:16.932857   10568 cni.go:84] Creating CNI manager for ""
	I1209 03:45:16.932890   10568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:45:16.932915   10568 start.go:340] cluster config:
	{Name:default-k8s-diff-port-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:16.937404   10568 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:45:16.945108   10568 out.go:177] * Starting "default-k8s-diff-port-193000" primary control-plane node in "default-k8s-diff-port-193000" cluster
	I1209 03:45:16.948174   10568 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:45:16.948187   10568 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:45:16.948194   10568 cache.go:56] Caching tarball of preloaded images
	I1209 03:45:16.948253   10568 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:45:16.948258   10568 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:45:16.948312   10568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/default-k8s-diff-port-193000/config.json ...
	I1209 03:45:16.948790   10568 start.go:360] acquireMachinesLock for default-k8s-diff-port-193000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:16.948822   10568 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "default-k8s-diff-port-193000"
	I1209 03:45:16.948830   10568 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:45:16.948836   10568 fix.go:54] fixHost starting: 
	I1209 03:45:16.948951   10568 fix.go:112] recreateIfNeeded on default-k8s-diff-port-193000: state=Stopped err=<nil>
	W1209 03:45:16.948958   10568 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:45:16.953144   10568 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-193000" ...
	I1209 03:45:16.960186   10568 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:16.960249   10568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ec:56:64:9c:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:16.962341   10568 main.go:141] libmachine: STDOUT: 
	I1209 03:45:16.962366   10568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:16.962395   10568 fix.go:56] duration metric: took 13.559708ms for fixHost
	I1209 03:45:16.962401   10568 start.go:83] releasing machines lock for "default-k8s-diff-port-193000", held for 13.574875ms
	W1209 03:45:16.962407   10568 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:16.962439   10568 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:16.962444   10568 start.go:729] Will try again in 5 seconds ...
	I1209 03:45:21.964654   10568 start.go:360] acquireMachinesLock for default-k8s-diff-port-193000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:21.965123   10568 start.go:364] duration metric: took 356.125µs to acquireMachinesLock for "default-k8s-diff-port-193000"
	I1209 03:45:21.965262   10568 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:45:21.965283   10568 fix.go:54] fixHost starting: 
	I1209 03:45:21.966053   10568 fix.go:112] recreateIfNeeded on default-k8s-diff-port-193000: state=Stopped err=<nil>
	W1209 03:45:21.966081   10568 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:45:21.975514   10568 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-193000" ...
	I1209 03:45:21.979684   10568 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:21.979990   10568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ec:56:64:9c:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/default-k8s-diff-port-193000/disk.qcow2
	I1209 03:45:21.990545   10568 main.go:141] libmachine: STDOUT: 
	I1209 03:45:21.990621   10568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:21.990766   10568 fix.go:56] duration metric: took 25.480583ms for fixHost
	I1209 03:45:21.990797   10568 start.go:83] releasing machines lock for "default-k8s-diff-port-193000", held for 25.649917ms
	W1209 03:45:21.991120   10568 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-193000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:21.998542   10568 out.go:201] 
	W1209 03:45:22.002610   10568 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:22.002665   10568 out.go:270] * 
	* 
	W1209 03:45:22.005030   10568 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:45:22.013465   10568 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-193000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (74.056208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
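Every failure in this group reduces to the same root cause visible in the trace above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so each start attempt dies with "Connection refused". Below is a minimal Go sketch (not part of the test suite; the socket path and file name probe.go are assumptions taken from the cluster config dump) that performs the same reachability check the driver's helper implicitly performs:

	// probe.go: dial the socket_vmnet unix socket and report whether the
	// daemon is accepting connections; this is the step failing above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// the condition surfacing in the log as:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}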

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-193000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (35.550417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
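This failure is downstream of the start failure: because SecondStart never provisioned the VM, no kubeconfig context exists for the profile, so the test's client config lookup fails before any pod check can run. A hedged illustration of that lookup using client-go's clientcmd package (an approximation for orientation, not the harness's exact helper; the file name context_check.go is invented):

	// context_check.go: load the kubeconfig the way client tooling does and
	// look for the profile's context.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// honors $KUBECONFIG, which this run points at
		// /Users/jenkins/minikube-integration/20068-6536/kubeconfig
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["default-k8s-diff-port-193000"]; !ok {
			fmt.Println(`context "default-k8s-diff-port-193000" does not exist`)
		}
	}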

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-193000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-193000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-193000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.868375ms)

** stderr ** 
	error: context "default-k8s-diff-port-193000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-193000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (34.176667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
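The assertion at start_stop_delete_test.go:297 checks the kubectl describe output for the custom addon image configured in this profile (registry.k8s.io/echoserver:1.4); with no context, the describe itself fails and the image check sees empty deployment info. A rough, illustrative Go sketch of that two-step check (the file name addon_image_check.go is invented; commands and names are taken from the log):

	// addon_image_check.go: describe the dashboard deployment and verify the
	// expected addon image string appears, mirroring the failing check above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "default-k8s-diff-port-193000",
			"describe", "deploy/dashboard-metrics-scraper",
			"-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			fmt.Printf("describe failed: %v\n%s", err, out) // what happens here
			return
		}
		if !strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
			fmt.Println("addon did not load correct image")
		}
	}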

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-193000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (34.264459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
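The "(-want +got)" block above is the diff shape produced by github.com/google/go-cmp: every expected v1.31.2 image sits on the -want side because "image list" returns nothing while the host is stopped. A self-contained sketch that reproduces the comparison (assuming go-cmp is the library in play, which its diff format suggests):

	// image_diff.go: compare the expected image set against an empty result,
	// yielding the same (-want +got) output shape as the report.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/kube-controller-manager:v1.31.2",
			"registry.k8s.io/kube-proxy:v1.31.2",
			"registry.k8s.io/kube-scheduler:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // image list is empty while the VM is stopped
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
		}
	}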

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-193000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-193000 --alsologtostderr -v=1: exit status 83 (46.367625ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-193000"

-- /stdout --
** stderr ** 
	I1209 03:45:22.307521   10591 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:22.307713   10591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:22.307716   10591 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:22.307719   10591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:22.307878   10591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:22.308111   10591 out.go:352] Setting JSON to false
	I1209 03:45:22.308118   10591 mustload.go:65] Loading cluster: default-k8s-diff-port-193000
	I1209 03:45:22.308340   10591 config.go:182] Loaded profile config "default-k8s-diff-port-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:22.311810   10591 out.go:177] * The control-plane node default-k8s-diff-port-193000 host is not running: state=Stopped
	I1209 03:45:22.315860   10591 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-193000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-193000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (33.067208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (33.829708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-193000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
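Note the two exit codes in this report: the failed starts exit with status 80 (the GUEST_PROVISION path), while pause exits with status 83 because mustload detects the stopped host and bails out with advice before attempting anything. A small sketch of how a caller can read that code, using only the standard library (binary path and profile name copied from the run above):

	// exit_code.go: run a minikube command and inspect its exit code, as the
	// test harness does when classifying these failures.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-darwin-arm64",
			"pause", "-p", "default-k8s-diff-port-193000").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// in this report: 83 = host not running, 80 = provisioning error
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}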

TestStartStop/group/newest-cni/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-402000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-402000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.963211458s)

-- stdout --
	* [newest-cni-402000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-402000" primary control-plane node in "newest-cni-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:45:22.644456   10608 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:22.644643   10608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:22.644646   10608 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:22.644648   10608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:22.644788   10608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:22.646429   10608 out.go:352] Setting JSON to false
	I1209 03:45:22.664918   10608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6293,"bootTime":1733738429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:45:22.665029   10608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:45:22.669904   10608 out.go:177] * [newest-cni-402000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:45:22.676935   10608 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:45:22.676970   10608 notify.go:220] Checking for updates...
	I1209 03:45:22.684790   10608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:45:22.687888   10608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:45:22.691668   10608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:45:22.694860   10608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:45:22.697892   10608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:45:22.701187   10608 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:22.701248   10608 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:22.701302   10608 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:45:22.705936   10608 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:45:22.712837   10608 start.go:297] selected driver: qemu2
	I1209 03:45:22.712842   10608 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:45:22.712847   10608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:45:22.715526   10608 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1209 03:45:22.715567   10608 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 03:45:22.718863   10608 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:45:22.725902   10608 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 03:45:22.725919   10608 cni.go:84] Creating CNI manager for ""
	I1209 03:45:22.725941   10608 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:45:22.725945   10608 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:45:22.725977   10608 start.go:340] cluster config:
	{Name:newest-cni-402000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:22.731075   10608 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:45:22.739859   10608 out.go:177] * Starting "newest-cni-402000" primary control-plane node in "newest-cni-402000" cluster
	I1209 03:45:22.743730   10608 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:45:22.743747   10608 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:45:22.743759   10608 cache.go:56] Caching tarball of preloaded images
	I1209 03:45:22.743843   10608 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:45:22.743849   10608 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:45:22.743912   10608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/newest-cni-402000/config.json ...
	I1209 03:45:22.743925   10608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/newest-cni-402000/config.json: {Name:mkfcc48fe71d7e4f551e9fe8dade2cea6a335ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:45:22.744411   10608 start.go:360] acquireMachinesLock for newest-cni-402000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:22.744464   10608 start.go:364] duration metric: took 46.292µs to acquireMachinesLock for "newest-cni-402000"
	I1209 03:45:22.744477   10608 start.go:93] Provisioning new machine with config: &{Name:newest-cni-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:22.744528   10608 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:22.752855   10608 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:45:22.771446   10608 start.go:159] libmachine.API.Create for "newest-cni-402000" (driver="qemu2")
	I1209 03:45:22.771477   10608 client.go:168] LocalClient.Create starting
	I1209 03:45:22.771569   10608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:22.771611   10608 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:22.771625   10608 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:22.771667   10608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:22.771700   10608 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:22.771709   10608 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:22.772273   10608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:22.934891   10608 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:22.968306   10608 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:22.968317   10608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:22.968583   10608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:22.978555   10608 main.go:141] libmachine: STDOUT: 
	I1209 03:45:22.978585   10608 main.go:141] libmachine: STDERR: 
	I1209 03:45:22.978644   10608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2 +20000M
	I1209 03:45:22.987096   10608 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:22.987109   10608 main.go:141] libmachine: STDERR: 
	I1209 03:45:22.987124   10608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:22.987130   10608 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:22.987144   10608 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:22.987178   10608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b0:59:19:a8:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:22.988962   10608 main.go:141] libmachine: STDOUT: 
	I1209 03:45:22.988975   10608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:22.988994   10608 client.go:171] duration metric: took 217.518ms to LocalClient.Create
	I1209 03:45:24.991136   10608 start.go:128] duration metric: took 2.246623333s to createHost
	I1209 03:45:24.991180   10608 start.go:83] releasing machines lock for "newest-cni-402000", held for 2.246748625s
	W1209 03:45:24.991247   10608 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:25.007448   10608 out.go:177] * Deleting "newest-cni-402000" in qemu2 ...
	W1209 03:45:25.039617   10608 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:25.039641   10608 start.go:729] Will try again in 5 seconds ...
	I1209 03:45:30.041708   10608 start.go:360] acquireMachinesLock for newest-cni-402000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:30.042275   10608 start.go:364] duration metric: took 467.583µs to acquireMachinesLock for "newest-cni-402000"
	I1209 03:45:30.042439   10608 start.go:93] Provisioning new machine with config: &{Name:newest-cni-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:30.042762   10608 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:30.059573   10608 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 03:45:30.109572   10608 start.go:159] libmachine.API.Create for "newest-cni-402000" (driver="qemu2")
	I1209 03:45:30.109615   10608 client.go:168] LocalClient.Create starting
	I1209 03:45:30.109742   10608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:30.109822   10608 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:30.109848   10608 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:30.109906   10608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:30.109965   10608 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:30.109976   10608 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:30.110727   10608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:30.285241   10608 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:30.500750   10608 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:30.500760   10608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:30.501067   10608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:30.511701   10608 main.go:141] libmachine: STDOUT: 
	I1209 03:45:30.511719   10608 main.go:141] libmachine: STDERR: 
	I1209 03:45:30.511779   10608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2 +20000M
	I1209 03:45:30.520518   10608 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:30.520539   10608 main.go:141] libmachine: STDERR: 
	I1209 03:45:30.520550   10608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:30.520556   10608 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:30.520563   10608 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:30.520604   10608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:89:88:f0:72:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:30.522481   10608 main.go:141] libmachine: STDOUT: 
	I1209 03:45:30.522494   10608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:30.522507   10608 client.go:171] duration metric: took 412.894041ms to LocalClient.Create
	I1209 03:45:32.524642   10608 start.go:128] duration metric: took 2.481889916s to createHost
	I1209 03:45:32.524706   10608 start.go:83] releasing machines lock for "newest-cni-402000", held for 2.482449375s
	W1209 03:45:32.525199   10608 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:32.540847   10608 out.go:201] 
	W1209 03:45:32.546108   10608 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:32.546151   10608 out.go:270] * 
	* 
	W1209 03:45:32.548574   10608 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:45:32.560895   10608 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-402000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000: exit status 7 (73.643333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.04s)
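FirstStart also shows the driver's fixed retry: the first create fails, the half-created machine is deleted, and after "Will try again in 5 seconds" a second create fails the same way, which is why this test burns roughly 10 seconds. A compact sketch of that retry shape (startHost is a hypothetical stand-in for the failing driver call, not minikube's actual function):

	// retry.go: one retry after a fixed 5-second delay, mirroring the
	// start.go:714/729 behavior in the trace above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("* Failed to start qemu2 VM:", err)
			}
		}
	}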

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-402000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-402000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.198759083s)

-- stdout --
	* [newest-cni-402000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-402000" primary control-plane node in "newest-cni-402000" cluster
	* Restarting existing qemu2 VM for "newest-cni-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:45:36.153851   10648 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:36.154007   10648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:36.154010   10648 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:36.154013   10648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:36.154163   10648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:36.155266   10648 out.go:352] Setting JSON to false
	I1209 03:45:36.173849   10648 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6307,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:45:36.173932   10648 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:45:36.178971   10648 out.go:177] * [newest-cni-402000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:45:36.186032   10648 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:45:36.186065   10648 notify.go:220] Checking for updates...
	I1209 03:45:36.197649   10648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:45:36.199070   10648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:45:36.202001   10648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:45:36.205049   10648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:45:36.208035   10648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:45:36.211305   10648 config.go:182] Loaded profile config "newest-cni-402000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:36.211579   10648 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:45:36.216043   10648 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:45:36.222894   10648 start.go:297] selected driver: qemu2
	I1209 03:45:36.222900   10648 start.go:901] validating driver "qemu2" against &{Name:newest-cni-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:36.222962   10648 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:45:36.225687   10648 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 03:45:36.225714   10648 cni.go:84] Creating CNI manager for ""
	I1209 03:45:36.225742   10648 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:45:36.225793   10648 start.go:340] cluster config:
	{Name:newest-cni-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:36.230372   10648 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:45:36.238997   10648 out.go:177] * Starting "newest-cni-402000" primary control-plane node in "newest-cni-402000" cluster
	I1209 03:45:36.242056   10648 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:45:36.242072   10648 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:45:36.242084   10648 cache.go:56] Caching tarball of preloaded images
	I1209 03:45:36.242156   10648 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:45:36.242162   10648 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:45:36.242236   10648 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/newest-cni-402000/config.json ...
	I1209 03:45:36.242742   10648 start.go:360] acquireMachinesLock for newest-cni-402000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:36.242772   10648 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "newest-cni-402000"
	I1209 03:45:36.242781   10648 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:45:36.242786   10648 fix.go:54] fixHost starting: 
	I1209 03:45:36.242904   10648 fix.go:112] recreateIfNeeded on newest-cni-402000: state=Stopped err=<nil>
	W1209 03:45:36.242913   10648 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:45:36.246937   10648 out.go:177] * Restarting existing qemu2 VM for "newest-cni-402000" ...
	I1209 03:45:36.254959   10648 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:36.255001   10648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:89:88:f0:72:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:36.257312   10648 main.go:141] libmachine: STDOUT: 
	I1209 03:45:36.257332   10648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:36.257363   10648 fix.go:56] duration metric: took 14.576125ms for fixHost
	I1209 03:45:36.257367   10648 start.go:83] releasing machines lock for "newest-cni-402000", held for 14.590958ms
	W1209 03:45:36.257373   10648 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:36.257412   10648 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:36.257417   10648 start.go:729] Will try again in 5 seconds ...
	I1209 03:45:41.259521   10648 start.go:360] acquireMachinesLock for newest-cni-402000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:41.259909   10648 start.go:364] duration metric: took 311.459µs to acquireMachinesLock for "newest-cni-402000"
	I1209 03:45:41.260029   10648 start.go:96] Skipping create...Using existing machine configuration
	I1209 03:45:41.260047   10648 fix.go:54] fixHost starting: 
	I1209 03:45:41.260707   10648 fix.go:112] recreateIfNeeded on newest-cni-402000: state=Stopped err=<nil>
	W1209 03:45:41.260731   10648 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 03:45:41.270315   10648 out.go:177] * Restarting existing qemu2 VM for "newest-cni-402000" ...
	I1209 03:45:41.275354   10648 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:41.275575   10648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:89:88:f0:72:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/newest-cni-402000/disk.qcow2
	I1209 03:45:41.285452   10648 main.go:141] libmachine: STDOUT: 
	I1209 03:45:41.285542   10648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:41.285624   10648 fix.go:56] duration metric: took 25.576792ms for fixHost
	I1209 03:45:41.285640   10648 start.go:83] releasing machines lock for "newest-cni-402000", held for 25.707958ms
	W1209 03:45:41.285857   10648 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-402000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-402000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:41.293290   10648 out.go:201] 
	W1209 03:45:41.297387   10648 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:41.297416   10648 out.go:270] * 
	* 
	W1209 03:45:41.300211   10648 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:45:41.307353   10648 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-402000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000: exit status 7 (71.427917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

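Every failure in this group reduces to the same root cause visible in the stderr above: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal health check of the daemon on the build host might look like the following (a sketch assuming a Homebrew-managed socket_vmnet install, as on this agent; the service name and restart step are assumptions, not taken from this log):

	# is the unix socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if not, restarting the Homebrew service may restore it (assumed setup)
	sudo brew services restart socket_vmnet
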
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-402000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000: exit status 7 (34.129166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

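In the "(-want +got)" diff above, the "-" prefixed entries are the images expected for v1.31.2 that the report could not find: "image list" ran against a host that never started, so there was nothing to match. Once the VM starts successfully, re-running the same command (taken verbatim from the test invocation above) would be expected to include those images:

	out/minikube-darwin-arm64 -p newest-cni-402000 image list --format=json
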
TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-402000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-402000 --alsologtostderr -v=1: exit status 83 (44.478917ms)

-- stdout --
	* The control-plane node newest-cni-402000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-402000"

-- /stdout --
** stderr ** 
	I1209 03:45:41.505337   10662 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:41.505522   10662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:41.505528   10662 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:41.505531   10662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:41.505662   10662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:41.505906   10662 out.go:352] Setting JSON to false
	I1209 03:45:41.505913   10662 mustload.go:65] Loading cluster: newest-cni-402000
	I1209 03:45:41.506123   10662 config.go:182] Loaded profile config "newest-cni-402000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:41.509964   10662 out.go:177] * The control-plane node newest-cni-402000 host is not running: state=Stopped
	I1209 03:45:41.513869   10662 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-402000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-402000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000: exit status 7 (33.553125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000: exit status 7 (34.25625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

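Exit status 83 here is a controlled refusal rather than a crash: pause detects state=Stopped and prints guidance instead of acting. The recovery path is the one the tool itself suggests in its output above, i.e. start the profile (once socket_vmnet is reachable again) and then retry the pause:

	out/minikube-darwin-arm64 start -p newest-cni-402000
	out/minikube-darwin-arm64 pause -p newest-cni-402000 --alsologtostderr -v=1
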
TestNetworkPlugins/group/auto/Start (9.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.927151291s)

-- stdout --
	* [auto-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-557000" primary control-plane node in "auto-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1209 03:45:41.846936   10679 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:41.847111   10679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:41.847114   10679 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:41.847117   10679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:41.847248   10679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:41.848398   10679 out.go:352] Setting JSON to false
	I1209 03:45:41.866039   10679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6312,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:45:41.866121   10679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:45:41.870926   10679 out.go:177] * [auto-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:45:41.878792   10679 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:45:41.878850   10679 notify.go:220] Checking for updates...
	I1209 03:45:41.885838   10679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:45:41.888804   10679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:45:41.891836   10679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:45:41.893401   10679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:45:41.896851   10679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:45:41.900230   10679 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:41.900288   10679 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:41.900348   10679 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:45:41.904639   10679 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:45:41.911873   10679 start.go:297] selected driver: qemu2
	I1209 03:45:41.911879   10679 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:45:41.911886   10679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:45:41.914490   10679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:45:41.917846   10679 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:45:41.921906   10679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:45:41.921928   10679 cni.go:84] Creating CNI manager for ""
	I1209 03:45:41.921952   10679 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:45:41.921961   10679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:45:41.922002   10679 start.go:340] cluster config:
	{Name:auto-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:41.926665   10679 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:45:41.934866   10679 out.go:177] * Starting "auto-557000" primary control-plane node in "auto-557000" cluster
	I1209 03:45:41.938861   10679 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:45:41.938875   10679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:45:41.938886   10679 cache.go:56] Caching tarball of preloaded images
	I1209 03:45:41.938963   10679 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:45:41.938969   10679 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:45:41.939023   10679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/auto-557000/config.json ...
	I1209 03:45:41.939034   10679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/auto-557000/config.json: {Name:mk3eea9086da1339159f8cf6289d60900021f5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:45:41.939514   10679 start.go:360] acquireMachinesLock for auto-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:41.939565   10679 start.go:364] duration metric: took 44.666µs to acquireMachinesLock for "auto-557000"
	I1209 03:45:41.939577   10679 start.go:93] Provisioning new machine with config: &{Name:auto-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:auto-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:41.939611   10679 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:41.947822   10679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:45:41.965580   10679 start.go:159] libmachine.API.Create for "auto-557000" (driver="qemu2")
	I1209 03:45:41.965612   10679 client.go:168] LocalClient.Create starting
	I1209 03:45:41.965687   10679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:41.965728   10679 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:41.965744   10679 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:41.965783   10679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:41.965814   10679 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:41.965823   10679 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:41.966310   10679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:42.127757   10679 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:42.223523   10679 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:42.223528   10679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:42.223760   10679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2
	I1209 03:45:42.233846   10679 main.go:141] libmachine: STDOUT: 
	I1209 03:45:42.233867   10679 main.go:141] libmachine: STDERR: 
	I1209 03:45:42.233933   10679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2 +20000M
	I1209 03:45:42.242439   10679 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:42.242452   10679 main.go:141] libmachine: STDERR: 
	I1209 03:45:42.242471   10679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2
	I1209 03:45:42.242476   10679 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:42.242489   10679 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:42.242518   10679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2d:25:cf:be:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2
	I1209 03:45:42.244370   10679 main.go:141] libmachine: STDOUT: 
	I1209 03:45:42.244385   10679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:42.244404   10679 client.go:171] duration metric: took 278.790125ms to LocalClient.Create
	I1209 03:45:44.246538   10679 start.go:128] duration metric: took 2.306947791s to createHost
	I1209 03:45:44.246604   10679 start.go:83] releasing machines lock for "auto-557000", held for 2.307073625s
	W1209 03:45:44.246656   10679 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:44.263909   10679 out.go:177] * Deleting "auto-557000" in qemu2 ...
	W1209 03:45:44.292535   10679 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:44.292557   10679 start.go:729] Will try again in 5 seconds ...
	I1209 03:45:49.294761   10679 start.go:360] acquireMachinesLock for auto-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:49.295329   10679 start.go:364] duration metric: took 440.292µs to acquireMachinesLock for "auto-557000"
	I1209 03:45:49.295476   10679 start.go:93] Provisioning new machine with config: &{Name:auto-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:auto-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:49.295765   10679 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:49.313377   10679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:45:49.362200   10679 start.go:159] libmachine.API.Create for "auto-557000" (driver="qemu2")
	I1209 03:45:49.362255   10679 client.go:168] LocalClient.Create starting
	I1209 03:45:49.362403   10679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:49.362496   10679 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:49.362515   10679 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:49.362576   10679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:49.362635   10679 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:49.362649   10679 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:49.363688   10679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:49.537596   10679 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:49.669046   10679 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:49.669058   10679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:49.669345   10679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2
	I1209 03:45:49.679372   10679 main.go:141] libmachine: STDOUT: 
	I1209 03:45:49.679391   10679 main.go:141] libmachine: STDERR: 
	I1209 03:45:49.679455   10679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2 +20000M
	I1209 03:45:49.687923   10679 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:49.687942   10679 main.go:141] libmachine: STDERR: 
	I1209 03:45:49.687955   10679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2
	I1209 03:45:49.687960   10679 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:49.687970   10679 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:49.688006   10679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:b3:fb:8c:13:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/auto-557000/disk.qcow2
	I1209 03:45:49.689774   10679 main.go:141] libmachine: STDOUT: 
	I1209 03:45:49.689787   10679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:49.689797   10679 client.go:171] duration metric: took 327.543666ms to LocalClient.Create
	I1209 03:45:51.691960   10679 start.go:128] duration metric: took 2.396201791s to createHost
	I1209 03:45:51.692044   10679 start.go:83] releasing machines lock for "auto-557000", held for 2.396736958s
	W1209 03:45:51.692422   10679 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:51.709254   10679 out.go:201] 
	W1209 03:45:51.713584   10679 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:45:51.713619   10679 out.go:270] * 
	* 
	W1209 03:45:51.716861   10679 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:45:51.728069   10679 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.93s)

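The create path fails at the same step as the restart path: socket_vmnet_client is meant to connect to /var/run/socket_vmnet and then exec QEMU with the connected socket handed down as a file descriptor (hence "-netdev socket,id=net0,fd=3" in the exec lines above), so a refused connection aborts before QEMU runs at all. Assuming socket_vmnet_client will exec any command once connected (its usual role as a qemu wrapper; using "true" as the child command is an assumption for illustration), a trivial connectivity probe could be:

	# exits non-zero with "Connection refused" while the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
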
TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.849104709s)

-- stdout --
	* [kindnet-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-557000" primary control-plane node in "kindnet-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1209 03:45:54.107508   10790 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:45:54.107687   10790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:54.107690   10790 out.go:358] Setting ErrFile to fd 2...
	I1209 03:45:54.107693   10790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:45:54.107845   10790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:45:54.108959   10790 out.go:352] Setting JSON to false
	I1209 03:45:54.126603   10790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6325,"bootTime":1733738429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:45:54.126676   10790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:45:54.132541   10790 out.go:177] * [kindnet-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:45:54.140458   10790 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:45:54.140508   10790 notify.go:220] Checking for updates...
	I1209 03:45:54.148348   10790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:45:54.152332   10790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:45:54.156212   10790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:45:54.159360   10790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:45:54.162443   10790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:45:54.165756   10790 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:54.165834   10790 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:45:54.165877   10790 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:45:54.170381   10790 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:45:54.177353   10790 start.go:297] selected driver: qemu2
	I1209 03:45:54.177359   10790 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:45:54.177366   10790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:45:54.179918   10790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:45:54.183373   10790 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:45:54.186428   10790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:45:54.186448   10790 cni.go:84] Creating CNI manager for "kindnet"
	I1209 03:45:54.186453   10790 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 03:45:54.186498   10790 start.go:340] cluster config:
	{Name:kindnet-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:45:54.191656   10790 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:45:54.200342   10790 out.go:177] * Starting "kindnet-557000" primary control-plane node in "kindnet-557000" cluster
	I1209 03:45:54.204438   10790 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:45:54.204454   10790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:45:54.204467   10790 cache.go:56] Caching tarball of preloaded images
	I1209 03:45:54.204544   10790 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:45:54.204554   10790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:45:54.204628   10790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kindnet-557000/config.json ...
	I1209 03:45:54.204639   10790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kindnet-557000/config.json: {Name:mke9b0f28e9fd64a5751a3bbb4e1e052b6959a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:45:54.205124   10790 start.go:360] acquireMachinesLock for kindnet-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:45:54.205178   10790 start.go:364] duration metric: took 48.125µs to acquireMachinesLock for "kindnet-557000"
	I1209 03:45:54.205190   10790 start.go:93] Provisioning new machine with config: &{Name:kindnet-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kindnet-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:45:54.205226   10790 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:45:54.210404   10790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:45:54.228938   10790 start.go:159] libmachine.API.Create for "kindnet-557000" (driver="qemu2")
	I1209 03:45:54.228962   10790 client.go:168] LocalClient.Create starting
	I1209 03:45:54.229034   10790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:45:54.229074   10790 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:54.229084   10790 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:54.229121   10790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:45:54.229151   10790 main.go:141] libmachine: Decoding PEM data...
	I1209 03:45:54.229163   10790 main.go:141] libmachine: Parsing certificate...
	I1209 03:45:54.229628   10790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:45:54.391811   10790 main.go:141] libmachine: Creating SSH key...
	I1209 03:45:54.461739   10790 main.go:141] libmachine: Creating Disk image...
	I1209 03:45:54.461747   10790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:45:54.461987   10790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2
	I1209 03:45:54.471774   10790 main.go:141] libmachine: STDOUT: 
	I1209 03:45:54.471794   10790 main.go:141] libmachine: STDERR: 
	I1209 03:45:54.471855   10790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2 +20000M
	I1209 03:45:54.480241   10790 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:45:54.480254   10790 main.go:141] libmachine: STDERR: 
	I1209 03:45:54.480266   10790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2
	I1209 03:45:54.480271   10790 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:45:54.480285   10790 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:45:54.480321   10790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:aa:ea:51:54:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2
	I1209 03:45:54.482132   10790 main.go:141] libmachine: STDOUT: 
	I1209 03:45:54.482145   10790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:45:54.482163   10790 client.go:171] duration metric: took 253.200042ms to LocalClient.Create
	I1209 03:45:56.484379   10790 start.go:128] duration metric: took 2.279092833s to createHost
	I1209 03:45:56.484436   10790 start.go:83] releasing machines lock for "kindnet-557000", held for 2.279290959s
	W1209 03:45:56.484486   10790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:56.496902   10790 out.go:177] * Deleting "kindnet-557000" in qemu2 ...
	W1209 03:45:56.530178   10790 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:45:56.530209   10790 start.go:729] Will try again in 5 seconds ...
	I1209 03:46:01.532296   10790 start.go:360] acquireMachinesLock for kindnet-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:01.532795   10790 start.go:364] duration metric: took 424.542µs to acquireMachinesLock for "kindnet-557000"
	I1209 03:46:01.532887   10790 start.go:93] Provisioning new machine with config: &{Name:kindnet-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:01.533090   10790 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:01.551017   10790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:01.601995   10790 start.go:159] libmachine.API.Create for "kindnet-557000" (driver="qemu2")
	I1209 03:46:01.602040   10790 client.go:168] LocalClient.Create starting
	I1209 03:46:01.602181   10790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:01.602269   10790 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:01.602288   10790 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:01.602350   10790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:01.602408   10790 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:01.602419   10790 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:01.603139   10790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:01.777107   10790 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:01.856480   10790 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:01.856486   10790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:01.856716   10790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2
	I1209 03:46:01.866551   10790 main.go:141] libmachine: STDOUT: 
	I1209 03:46:01.866571   10790 main.go:141] libmachine: STDERR: 
	I1209 03:46:01.866627   10790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2 +20000M
	I1209 03:46:01.875151   10790 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:01.875164   10790 main.go:141] libmachine: STDERR: 
	I1209 03:46:01.875175   10790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2
	I1209 03:46:01.875179   10790 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:01.875190   10790 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:01.875227   10790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:4c:ee:be:fe:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kindnet-557000/disk.qcow2
	I1209 03:46:01.877062   10790 main.go:141] libmachine: STDOUT: 
	I1209 03:46:01.877075   10790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:01.877085   10790 client.go:171] duration metric: took 275.045834ms to LocalClient.Create
	I1209 03:46:03.879225   10790 start.go:128] duration metric: took 2.346148666s to createHost
	I1209 03:46:03.879286   10790 start.go:83] releasing machines lock for "kindnet-557000", held for 2.346510875s
	W1209 03:46:03.879758   10790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:03.896437   10790 out.go:201] 
	W1209 03:46:03.901338   10790 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:46:03.901362   10790 out.go:270] * 
	* 
	W1209 03:46:03.903903   10790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:46:03.911450   10790 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
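Every qemu2 start in this group fails at the same step: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and libmachine aborts the create. The failure can be reproduced without minikube by dialing the socket directly; what follows is a minimal Go sketch (standard library only; the socket path is taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Dial the socket_vmnet control socket the same way a client would.
	// "connection refused" means the socket file exists but no daemon is
	// listening on it; "no such file" means the daemon was never started.
	func main() {
		const sock = "/var/run/socket_vmnet" // path from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the dial would return "connection refused" for every profile, which suggests the socket_vmnet daemon on the host is down (or left a stale socket file) rather than anything specific to the kindnet test.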

TestNetworkPlugins/group/calico/Start (10.08s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.073381167s)

-- stdout --
	* [calico-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-557000" primary control-plane node in "calico-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:46:06.403798   10907 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:46:06.403962   10907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:06.403965   10907 out.go:358] Setting ErrFile to fd 2...
	I1209 03:46:06.403968   10907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:06.404112   10907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:46:06.405358   10907 out.go:352] Setting JSON to false
	I1209 03:46:06.423131   10907 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6337,"bootTime":1733738429,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:46:06.423201   10907 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:46:06.429116   10907 out.go:177] * [calico-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:46:06.436930   10907 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:46:06.437002   10907 notify.go:220] Checking for updates...
	I1209 03:46:06.443942   10907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:46:06.446915   10907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:46:06.449966   10907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:46:06.451278   10907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:46:06.453965   10907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:46:06.457306   10907 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:06.457384   10907 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:06.457439   10907 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:46:06.459022   10907 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:46:06.467192   10907 start.go:297] selected driver: qemu2
	I1209 03:46:06.467201   10907 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:46:06.467209   10907 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:46:06.469715   10907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:46:06.472981   10907 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:46:06.477024   10907 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:46:06.477047   10907 cni.go:84] Creating CNI manager for "calico"
	I1209 03:46:06.477056   10907 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1209 03:46:06.477104   10907 start.go:340] cluster config:
	{Name:calico-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:46:06.481730   10907 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:46:06.489976   10907 out.go:177] * Starting "calico-557000" primary control-plane node in "calico-557000" cluster
	I1209 03:46:06.493972   10907 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:46:06.493996   10907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:46:06.494006   10907 cache.go:56] Caching tarball of preloaded images
	I1209 03:46:06.494085   10907 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:46:06.494091   10907 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:46:06.494149   10907 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/calico-557000/config.json ...
	I1209 03:46:06.494162   10907 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/calico-557000/config.json: {Name:mkb474ad3d86b4f817bc5c03ce8ae5a2c77822c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:46:06.494645   10907 start.go:360] acquireMachinesLock for calico-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:06.494696   10907 start.go:364] duration metric: took 45.166µs to acquireMachinesLock for "calico-557000"
	I1209 03:46:06.494708   10907 start.go:93] Provisioning new machine with config: &{Name:calico-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:06.494746   10907 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:06.503922   10907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:06.521963   10907 start.go:159] libmachine.API.Create for "calico-557000" (driver="qemu2")
	I1209 03:46:06.521996   10907 client.go:168] LocalClient.Create starting
	I1209 03:46:06.522075   10907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:06.522124   10907 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:06.522139   10907 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:06.522176   10907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:06.522210   10907 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:06.522221   10907 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:06.522741   10907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:06.685147   10907 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:06.896735   10907 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:06.896743   10907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:06.897017   10907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2
	I1209 03:46:06.907456   10907 main.go:141] libmachine: STDOUT: 
	I1209 03:46:06.907481   10907 main.go:141] libmachine: STDERR: 
	I1209 03:46:06.907535   10907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2 +20000M
	I1209 03:46:06.916087   10907 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:06.916103   10907 main.go:141] libmachine: STDERR: 
	I1209 03:46:06.916126   10907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2
	I1209 03:46:06.916133   10907 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:06.916146   10907 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:06.916176   10907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d0:bd:3b:9d:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2
	I1209 03:46:06.917945   10907 main.go:141] libmachine: STDOUT: 
	I1209 03:46:06.917958   10907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:06.917977   10907 client.go:171] duration metric: took 395.983041ms to LocalClient.Create
	I1209 03:46:08.920121   10907 start.go:128] duration metric: took 2.425399125s to createHost
	I1209 03:46:08.920245   10907 start.go:83] releasing machines lock for "calico-557000", held for 2.425584334s
	W1209 03:46:08.920302   10907 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:08.935424   10907 out.go:177] * Deleting "calico-557000" in qemu2 ...
	W1209 03:46:08.964253   10907 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:08.964287   10907 start.go:729] Will try again in 5 seconds ...
	I1209 03:46:13.966390   10907 start.go:360] acquireMachinesLock for calico-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:13.967001   10907 start.go:364] duration metric: took 469.25µs to acquireMachinesLock for "calico-557000"
	I1209 03:46:13.967129   10907 start.go:93] Provisioning new machine with config: &{Name:calico-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:13.967418   10907 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:13.985724   10907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:14.035178   10907 start.go:159] libmachine.API.Create for "calico-557000" (driver="qemu2")
	I1209 03:46:14.035234   10907 client.go:168] LocalClient.Create starting
	I1209 03:46:14.035350   10907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:14.035424   10907 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:14.035445   10907 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:14.035505   10907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:14.035565   10907 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:14.035584   10907 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:14.036276   10907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:14.211644   10907 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:14.372701   10907 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:14.372715   10907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:14.372950   10907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2
	I1209 03:46:14.383203   10907 main.go:141] libmachine: STDOUT: 
	I1209 03:46:14.383238   10907 main.go:141] libmachine: STDERR: 
	I1209 03:46:14.383303   10907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2 +20000M
	I1209 03:46:14.391735   10907 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:14.391755   10907 main.go:141] libmachine: STDERR: 
	I1209 03:46:14.391777   10907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2
	I1209 03:46:14.391784   10907 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:14.391791   10907 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:14.391826   10907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:7f:4b:5f:8a:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/calico-557000/disk.qcow2
	I1209 03:46:14.393593   10907 main.go:141] libmachine: STDOUT: 
	I1209 03:46:14.393611   10907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:14.393625   10907 client.go:171] duration metric: took 358.392208ms to LocalClient.Create
	I1209 03:46:16.394535   10907 start.go:128] duration metric: took 2.42712225s to createHost
	I1209 03:46:16.394597   10907 start.go:83] releasing machines lock for "calico-557000", held for 2.427618708s
	W1209 03:46:16.395003   10907 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:16.409812   10907 out.go:201] 
	W1209 03:46:16.414891   10907 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:46:16.414916   10907 out.go:270] * 
	* 
	W1209 03:46:16.417474   10907 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:46:16.432756   10907 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.08s)
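The retry behavior visible in these logs is fixed rather than adaptive: after the first StartHost failure the driver deletes the half-created profile, waits five seconds, provisions once more, and a second failure escalates to GUEST_PROVISION with exit status 80. A condensed sketch of that control flow, with illustrative function names that are not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the libmachine create path; on this agent
	// it always fails the way the log does.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "calico-557000"
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // fixed pause, per "Will try again in 5 seconds"
			if err := createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status net_test.go reports
			}
		}
	}

Because the error is environmental, both attempts fail identically for every profile; restoring the socket_vmnet service should clear the whole TestNetworkPlugins group at once.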

TestNetworkPlugins/group/custom-flannel/Start (10.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.088402625s)

-- stdout --
	* [custom-flannel-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-557000" primary control-plane node in "custom-flannel-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:46:19.026394   11024 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:46:19.026542   11024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:19.026545   11024 out.go:358] Setting ErrFile to fd 2...
	I1209 03:46:19.026548   11024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:19.026668   11024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:46:19.027780   11024 out.go:352] Setting JSON to false
	I1209 03:46:19.045321   11024 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6350,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:46:19.045389   11024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:46:19.052620   11024 out.go:177] * [custom-flannel-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:46:19.061271   11024 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:46:19.061293   11024 notify.go:220] Checking for updates...
	I1209 03:46:19.069582   11024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:46:19.073559   11024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:46:19.076592   11024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:46:19.079632   11024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:46:19.082664   11024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:46:19.086377   11024 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:19.086465   11024 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:19.086521   11024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:46:19.090642   11024 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:46:19.097555   11024 start.go:297] selected driver: qemu2
	I1209 03:46:19.097561   11024 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:46:19.097567   11024 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:46:19.100193   11024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:46:19.103625   11024 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:46:19.106625   11024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:46:19.106643   11024 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1209 03:46:19.106659   11024 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1209 03:46:19.106695   11024 start.go:340] cluster config:
	{Name:custom-flannel-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:46:19.111792   11024 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:46:19.120457   11024 out.go:177] * Starting "custom-flannel-557000" primary control-plane node in "custom-flannel-557000" cluster
	I1209 03:46:19.124605   11024 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:46:19.124625   11024 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:46:19.124634   11024 cache.go:56] Caching tarball of preloaded images
	I1209 03:46:19.124710   11024 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:46:19.124716   11024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:46:19.124779   11024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/custom-flannel-557000/config.json ...
	I1209 03:46:19.124791   11024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/custom-flannel-557000/config.json: {Name:mkaa1f2c9c7c9871e913a57c038b35937051fdf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:46:19.125247   11024 start.go:360] acquireMachinesLock for custom-flannel-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:19.125300   11024 start.go:364] duration metric: took 44.958µs to acquireMachinesLock for "custom-flannel-557000"
	I1209 03:46:19.125313   11024 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:19.125356   11024 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:19.132623   11024 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:19.151446   11024 start.go:159] libmachine.API.Create for "custom-flannel-557000" (driver="qemu2")
	I1209 03:46:19.151472   11024 client.go:168] LocalClient.Create starting
	I1209 03:46:19.151548   11024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:19.151588   11024 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:19.151601   11024 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:19.151643   11024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:19.151676   11024 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:19.151684   11024 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:19.152251   11024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:19.314242   11024 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:19.621886   11024 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:19.621898   11024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:19.622173   11024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2
	I1209 03:46:19.632733   11024 main.go:141] libmachine: STDOUT: 
	I1209 03:46:19.632761   11024 main.go:141] libmachine: STDERR: 
	I1209 03:46:19.632839   11024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2 +20000M
	I1209 03:46:19.641400   11024 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:19.641438   11024 main.go:141] libmachine: STDERR: 
	I1209 03:46:19.641453   11024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2
	I1209 03:46:19.641458   11024 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:19.641468   11024 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:19.641506   11024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4f:76:3f:42:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2
	I1209 03:46:19.643287   11024 main.go:141] libmachine: STDOUT: 
	I1209 03:46:19.643300   11024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:19.643318   11024 client.go:171] duration metric: took 491.850125ms to LocalClient.Create
	I1209 03:46:21.645506   11024 start.go:128] duration metric: took 2.520162292s to createHost
	I1209 03:46:21.645583   11024 start.go:83] releasing machines lock for "custom-flannel-557000", held for 2.5203195s
	W1209 03:46:21.645710   11024 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:21.655850   11024 out.go:177] * Deleting "custom-flannel-557000" in qemu2 ...
	W1209 03:46:21.688882   11024 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:21.688919   11024 start.go:729] Will try again in 5 seconds ...
	I1209 03:46:26.691021   11024 start.go:360] acquireMachinesLock for custom-flannel-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:26.691729   11024 start.go:364] duration metric: took 557.583µs to acquireMachinesLock for "custom-flannel-557000"
	I1209 03:46:26.691857   11024 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:26.692363   11024 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:26.711839   11024 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:26.761061   11024 start.go:159] libmachine.API.Create for "custom-flannel-557000" (driver="qemu2")
	I1209 03:46:26.761113   11024 client.go:168] LocalClient.Create starting
	I1209 03:46:26.761233   11024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:26.761307   11024 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:26.761326   11024 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:26.761383   11024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:26.761439   11024 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:26.761457   11024 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:26.762194   11024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:26.936335   11024 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:27.016046   11024 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:27.016055   11024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:27.016310   11024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2
	I1209 03:46:27.026081   11024 main.go:141] libmachine: STDOUT: 
	I1209 03:46:27.026103   11024 main.go:141] libmachine: STDERR: 
	I1209 03:46:27.026170   11024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2 +20000M
	I1209 03:46:27.034542   11024 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:27.034572   11024 main.go:141] libmachine: STDERR: 
	I1209 03:46:27.034587   11024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2
	I1209 03:46:27.034593   11024 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:27.034601   11024 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:27.034634   11024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4e:2f:f4:f8:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/custom-flannel-557000/disk.qcow2
	I1209 03:46:27.036484   11024 main.go:141] libmachine: STDOUT: 
	I1209 03:46:27.036497   11024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:27.036520   11024 client.go:171] duration metric: took 275.406833ms to LocalClient.Create
	I1209 03:46:29.038649   11024 start.go:128] duration metric: took 2.346298625s to createHost
	I1209 03:46:29.038695   11024 start.go:83] releasing machines lock for "custom-flannel-557000", held for 2.346970916s
	W1209 03:46:29.039067   11024 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:29.047755   11024 out.go:201] 
	W1209 03:46:29.057948   11024 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:46:29.057979   11024 out.go:270] * 
	* 
	W1209 03:46:29.060590   11024 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:46:29.068851   11024 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.09s)
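Every Start failure in this group traces back to the same environmental fault rather than anything CNI-specific: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon listening on /var/run/socket_vmnet, and each attempt above gets "Connection refused" because nothing is listening on the CI host. A minimal standalone Go probe for that precondition follows (a sketch, not part of net_test.go; the socket path is copied from the logs, and dialing it may require the same privileges socket_vmnet runs with):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path copied from the failing logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is exactly the state the tests hit: connection refused.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Restarting the daemon on the host (however it is supervised there) is the actual fix; the suggested "minikube delete -p custom-flannel-557000" only clears the half-created profile.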

TestNetworkPlugins/group/false/Start (9.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.943261708s)

-- stdout --
	* [false-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-557000" primary control-plane node in "false-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:46:31.638621   11145 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:46:31.638784   11145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:31.638787   11145 out.go:358] Setting ErrFile to fd 2...
	I1209 03:46:31.638790   11145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:31.638922   11145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:46:31.640007   11145 out.go:352] Setting JSON to false
	I1209 03:46:31.657540   11145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6362,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:46:31.657620   11145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:46:31.664616   11145 out.go:177] * [false-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:46:31.673112   11145 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:46:31.673151   11145 notify.go:220] Checking for updates...
	I1209 03:46:31.681222   11145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:46:31.685132   11145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:46:31.688187   11145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:46:31.691216   11145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:46:31.694148   11145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:46:31.697533   11145 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:31.697610   11145 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:31.697683   11145 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:46:31.702148   11145 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:46:31.709102   11145 start.go:297] selected driver: qemu2
	I1209 03:46:31.709108   11145 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:46:31.709115   11145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:46:31.711719   11145 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:46:31.715245   11145 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:46:31.718240   11145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:46:31.718256   11145 cni.go:84] Creating CNI manager for "false"
	I1209 03:46:31.718284   11145 start.go:340] cluster config:
	{Name:false-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:46:31.723246   11145 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:46:31.731156   11145 out.go:177] * Starting "false-557000" primary control-plane node in "false-557000" cluster
	I1209 03:46:31.735117   11145 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:46:31.735137   11145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:46:31.735146   11145 cache.go:56] Caching tarball of preloaded images
	I1209 03:46:31.735228   11145 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:46:31.735234   11145 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:46:31.735297   11145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/false-557000/config.json ...
	I1209 03:46:31.735313   11145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/false-557000/config.json: {Name:mk4d7dd82ccd7612c4aa9dc7481d639792525e26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:46:31.735767   11145 start.go:360] acquireMachinesLock for false-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:31.735818   11145 start.go:364] duration metric: took 44.25µs to acquireMachinesLock for "false-557000"
	I1209 03:46:31.735830   11145 start.go:93] Provisioning new machine with config: &{Name:false-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:31.735864   11145 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:31.744187   11145 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:31.761389   11145 start.go:159] libmachine.API.Create for "false-557000" (driver="qemu2")
	I1209 03:46:31.761415   11145 client.go:168] LocalClient.Create starting
	I1209 03:46:31.761496   11145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:31.761536   11145 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:31.761546   11145 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:31.761583   11145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:31.761614   11145 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:31.761623   11145 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:31.762188   11145 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:31.925139   11145 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:31.973423   11145 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:31.973428   11145 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:31.973649   11145 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2
	I1209 03:46:31.983411   11145 main.go:141] libmachine: STDOUT: 
	I1209 03:46:31.983431   11145 main.go:141] libmachine: STDERR: 
	I1209 03:46:31.983490   11145 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2 +20000M
	I1209 03:46:31.991869   11145 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:31.991884   11145 main.go:141] libmachine: STDERR: 
	I1209 03:46:31.991901   11145 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2
	I1209 03:46:31.991904   11145 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:31.991918   11145 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:31.991951   11145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:f7:be:f7:4f:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2
	I1209 03:46:31.993767   11145 main.go:141] libmachine: STDOUT: 
	I1209 03:46:31.993782   11145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:31.993799   11145 client.go:171] duration metric: took 232.383542ms to LocalClient.Create
	I1209 03:46:33.995936   11145 start.go:128] duration metric: took 2.260091833s to createHost
	I1209 03:46:33.995993   11145 start.go:83] releasing machines lock for "false-557000", held for 2.260208459s
	W1209 03:46:33.996043   11145 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:34.014488   11145 out.go:177] * Deleting "false-557000" in qemu2 ...
	W1209 03:46:34.043125   11145 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:34.043147   11145 start.go:729] Will try again in 5 seconds ...
	I1209 03:46:39.045419   11145 start.go:360] acquireMachinesLock for false-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:39.046065   11145 start.go:364] duration metric: took 511.417µs to acquireMachinesLock for "false-557000"
	I1209 03:46:39.046191   11145 start.go:93] Provisioning new machine with config: &{Name:false-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:39.046530   11145 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:39.065038   11145 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:39.112721   11145 start.go:159] libmachine.API.Create for "false-557000" (driver="qemu2")
	I1209 03:46:39.112784   11145 client.go:168] LocalClient.Create starting
	I1209 03:46:39.112903   11145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:39.112974   11145 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:39.112990   11145 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:39.113047   11145 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:39.113102   11145 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:39.113118   11145 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:39.113657   11145 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:39.286791   11145 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:39.474581   11145 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:39.474588   11145 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:39.474862   11145 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2
	I1209 03:46:39.485172   11145 main.go:141] libmachine: STDOUT: 
	I1209 03:46:39.485195   11145 main.go:141] libmachine: STDERR: 
	I1209 03:46:39.485272   11145 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2 +20000M
	I1209 03:46:39.493854   11145 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:39.493870   11145 main.go:141] libmachine: STDERR: 
	I1209 03:46:39.493879   11145 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2
	I1209 03:46:39.493892   11145 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:39.493903   11145 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:39.493927   11145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4d:b8:90:d4:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/false-557000/disk.qcow2
	I1209 03:46:39.495702   11145 main.go:141] libmachine: STDOUT: 
	I1209 03:46:39.495717   11145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:39.495729   11145 client.go:171] duration metric: took 382.947708ms to LocalClient.Create
	I1209 03:46:41.497856   11145 start.go:128] duration metric: took 2.4513395s to createHost
	I1209 03:46:41.498002   11145 start.go:83] releasing machines lock for "false-557000", held for 2.451908834s
	W1209 03:46:41.498354   11145 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:41.512974   11145 out.go:201] 
	W1209 03:46:41.518102   11145 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:46:41.518130   11145 out.go:270] * 
	* 
	W1209 03:46:41.520964   11145 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:46:41.534922   11145 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.95s)
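For context, the disk preparation that precedes each failed boot does succeed: libmachine logs two qemu-img invocations, a raw-to-qcow2 convert of the boot image followed by a +20000M resize, and both return empty STDERR above. A rough standalone sketch of that sequence (paths and the helper name are illustrative, not minikube's actual layout):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two commands logged by libmachine above:
	//   qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	//   qemu-img resize disk.qcow2 +20000M
	func createDisk(rawPath, qcowPath string, extraMB int) error {
		convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcowPath)
		if out, err := convert.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		resize := exec.Command("qemu-img", "resize", qcowPath, fmt.Sprintf("+%dM", extraMB))
		if out, err := resize.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println("disk creation failed:", err)
		}
	}

Only the step after this, handing the netdev file descriptor from socket_vmnet_client to qemu-system-aarch64, fails.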

TestNetworkPlugins/group/enable-default-cni/Start (9.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.969377042s)

-- stdout --
	* [enable-default-cni-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-557000" primary control-plane node in "enable-default-cni-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:46:43.826401   11256 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:46:43.826548   11256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:43.826551   11256 out.go:358] Setting ErrFile to fd 2...
	I1209 03:46:43.826553   11256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:43.826681   11256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:46:43.827794   11256 out.go:352] Setting JSON to false
	I1209 03:46:43.845686   11256 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6374,"bootTime":1733738429,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:46:43.845764   11256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:46:43.850753   11256 out.go:177] * [enable-default-cni-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:46:43.858806   11256 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:46:43.858860   11256 notify.go:220] Checking for updates...
	I1209 03:46:43.866730   11256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:46:43.869749   11256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:46:43.872835   11256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:46:43.875790   11256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:46:43.878765   11256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:46:43.882210   11256 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:43.882288   11256 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:43.882332   11256 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:46:43.887503   11256 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:46:43.895700   11256 start.go:297] selected driver: qemu2
	I1209 03:46:43.895706   11256 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:46:43.895712   11256 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:46:43.898639   11256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:46:43.902791   11256 out.go:177] * Automatically selected the socket_vmnet network
	E1209 03:46:43.905814   11256 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1209 03:46:43.905825   11256 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:46:43.905843   11256 cni.go:84] Creating CNI manager for "bridge"
	I1209 03:46:43.905847   11256 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:46:43.905877   11256 start.go:340] cluster config:
	{Name:enable-default-cni-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:46:43.910602   11256 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:46:43.918800   11256 out.go:177] * Starting "enable-default-cni-557000" primary control-plane node in "enable-default-cni-557000" cluster
	I1209 03:46:43.922743   11256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:46:43.922759   11256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:46:43.922769   11256 cache.go:56] Caching tarball of preloaded images
	I1209 03:46:43.922849   11256 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:46:43.922855   11256 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:46:43.922920   11256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/enable-default-cni-557000/config.json ...
	I1209 03:46:43.922932   11256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/enable-default-cni-557000/config.json: {Name:mk0d6f89535f360eb7f0d13c4b6d6065ad7a7b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:46:43.923402   11256 start.go:360] acquireMachinesLock for enable-default-cni-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:43.923453   11256 start.go:364] duration metric: took 44.75µs to acquireMachinesLock for "enable-default-cni-557000"
	I1209 03:46:43.923469   11256 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:43.923504   11256 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:43.931679   11256 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:43.950254   11256 start.go:159] libmachine.API.Create for "enable-default-cni-557000" (driver="qemu2")
	I1209 03:46:43.950285   11256 client.go:168] LocalClient.Create starting
	I1209 03:46:43.950359   11256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:43.950399   11256 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:43.950410   11256 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:43.950452   11256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:43.950484   11256 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:43.950491   11256 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:43.950993   11256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:44.112859   11256 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:44.340111   11256 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:44.340121   11256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:44.340398   11256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2
	I1209 03:46:44.350966   11256 main.go:141] libmachine: STDOUT: 
	I1209 03:46:44.350985   11256 main.go:141] libmachine: STDERR: 
	I1209 03:46:44.351043   11256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2 +20000M
	I1209 03:46:44.359542   11256 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:44.359560   11256 main.go:141] libmachine: STDERR: 
	I1209 03:46:44.359576   11256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2
	I1209 03:46:44.359582   11256 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:44.359600   11256 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:44.359634   11256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:18:2a:a2:88:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2
	I1209 03:46:44.361424   11256 main.go:141] libmachine: STDOUT: 
	I1209 03:46:44.361442   11256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:44.361460   11256 client.go:171] duration metric: took 411.177042ms to LocalClient.Create
	I1209 03:46:46.363606   11256 start.go:128] duration metric: took 2.440118334s to createHost
	I1209 03:46:46.363723   11256 start.go:83] releasing machines lock for "enable-default-cni-557000", held for 2.440304041s
	W1209 03:46:46.363776   11256 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:46.376156   11256 out.go:177] * Deleting "enable-default-cni-557000" in qemu2 ...
	W1209 03:46:46.406416   11256 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:46.406442   11256 start.go:729] Will try again in 5 seconds ...
	I1209 03:46:51.408683   11256 start.go:360] acquireMachinesLock for enable-default-cni-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:51.409251   11256 start.go:364] duration metric: took 458.708µs to acquireMachinesLock for "enable-default-cni-557000"
	I1209 03:46:51.409397   11256 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:51.409615   11256 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:51.428028   11256 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:51.476802   11256 start.go:159] libmachine.API.Create for "enable-default-cni-557000" (driver="qemu2")
	I1209 03:46:51.476869   11256 client.go:168] LocalClient.Create starting
	I1209 03:46:51.477004   11256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:51.477091   11256 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:51.477107   11256 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:51.477180   11256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:51.477237   11256 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:51.477250   11256 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:51.478109   11256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:51.652082   11256 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:51.692566   11256 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:51.692572   11256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:51.692801   11256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2
	I1209 03:46:51.702568   11256 main.go:141] libmachine: STDOUT: 
	I1209 03:46:51.702589   11256 main.go:141] libmachine: STDERR: 
	I1209 03:46:51.702650   11256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2 +20000M
	I1209 03:46:51.711008   11256 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:51.711027   11256 main.go:141] libmachine: STDERR: 
	I1209 03:46:51.711042   11256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2
	I1209 03:46:51.711045   11256 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:51.711055   11256 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:51.711082   11256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ac:d8:4a:a5:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/enable-default-cni-557000/disk.qcow2
	I1209 03:46:51.712903   11256 main.go:141] libmachine: STDOUT: 
	I1209 03:46:51.712927   11256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:51.712939   11256 client.go:171] duration metric: took 236.069375ms to LocalClient.Create
	I1209 03:46:53.715076   11256 start.go:128] duration metric: took 2.305444916s to createHost
	I1209 03:46:53.715137   11256 start.go:83] releasing machines lock for "enable-default-cni-557000", held for 2.305903625s
	W1209 03:46:53.715477   11256 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:53.730073   11256 out.go:201] 
	W1209 03:46:53.733150   11256 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:46:53.733204   11256 out.go:270] * 
	* 
	W1209 03:46:53.735658   11256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:46:53.748030   11256 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.97s)
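
The start failures in this group share one proximate cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never launched. A minimal diagnostic sketch for the build host (assuming socket_vmnet is installed under /opt/socket_vmnet, as the client path in the log suggests):

	# Diagnostic sketch -- not part of the captured test run.
	# Check that the daemon socket exists and a socket_vmnet process is alive.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet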

TestNetworkPlugins/group/flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.873413833s)

-- stdout --
	* [flannel-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-557000" primary control-plane node in "flannel-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:46:56.049046   11371 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:46:56.049196   11371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:56.049199   11371 out.go:358] Setting ErrFile to fd 2...
	I1209 03:46:56.049201   11371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:46:56.049338   11371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:46:56.050513   11371 out.go:352] Setting JSON to false
	I1209 03:46:56.068333   11371 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6387,"bootTime":1733738429,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:46:56.068403   11371 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:46:56.075405   11371 out.go:177] * [flannel-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:46:56.083037   11371 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:46:56.083101   11371 notify.go:220] Checking for updates...
	I1209 03:46:56.091446   11371 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:46:56.094384   11371 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:46:56.097331   11371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:46:56.100357   11371 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:46:56.103290   11371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:46:56.106737   11371 config.go:182] Loaded profile config "cert-expiration-299000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:56.106830   11371 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:46:56.106879   11371 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:46:56.111340   11371 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:46:56.118332   11371 start.go:297] selected driver: qemu2
	I1209 03:46:56.118339   11371 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:46:56.118351   11371 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:46:56.120945   11371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:46:56.123364   11371 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:46:56.124775   11371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:46:56.124794   11371 cni.go:84] Creating CNI manager for "flannel"
	I1209 03:46:56.124800   11371 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1209 03:46:56.124825   11371 start.go:340] cluster config:
	{Name:flannel-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:46:56.129386   11371 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:46:56.137348   11371 out.go:177] * Starting "flannel-557000" primary control-plane node in "flannel-557000" cluster
	I1209 03:46:56.141370   11371 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:46:56.141391   11371 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:46:56.141405   11371 cache.go:56] Caching tarball of preloaded images
	I1209 03:46:56.141493   11371 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:46:56.141499   11371 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:46:56.141564   11371 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/flannel-557000/config.json ...
	I1209 03:46:56.141576   11371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/flannel-557000/config.json: {Name:mkf198daff768333be0b9f00c52b56052800bac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:46:56.142014   11371 start.go:360] acquireMachinesLock for flannel-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:46:56.142065   11371 start.go:364] duration metric: took 45.166µs to acquireMachinesLock for "flannel-557000"
	I1209 03:46:56.142078   11371 start.go:93] Provisioning new machine with config: &{Name:flannel-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:46:56.142113   11371 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:46:56.150303   11371 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:46:56.168940   11371 start.go:159] libmachine.API.Create for "flannel-557000" (driver="qemu2")
	I1209 03:46:56.168973   11371 client.go:168] LocalClient.Create starting
	I1209 03:46:56.169060   11371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:46:56.169105   11371 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:56.169120   11371 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:56.169156   11371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:46:56.169186   11371 main.go:141] libmachine: Decoding PEM data...
	I1209 03:46:56.169195   11371 main.go:141] libmachine: Parsing certificate...
	I1209 03:46:56.169685   11371 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:46:56.332918   11371 main.go:141] libmachine: Creating SSH key...
	I1209 03:46:56.427452   11371 main.go:141] libmachine: Creating Disk image...
	I1209 03:46:56.427457   11371 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:46:56.427684   11371 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2
	I1209 03:46:56.437432   11371 main.go:141] libmachine: STDOUT: 
	I1209 03:46:56.437453   11371 main.go:141] libmachine: STDERR: 
	I1209 03:46:56.437525   11371 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2 +20000M
	I1209 03:46:56.446024   11371 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:46:56.446036   11371 main.go:141] libmachine: STDERR: 
	I1209 03:46:56.446050   11371 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2
	I1209 03:46:56.446065   11371 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:46:56.446075   11371 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:46:56.446104   11371 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:53:ce:da:89:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2
	I1209 03:46:56.447843   11371 main.go:141] libmachine: STDOUT: 
	I1209 03:46:56.447858   11371 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:46:56.447890   11371 client.go:171] duration metric: took 278.917084ms to LocalClient.Create
	I1209 03:46:58.450103   11371 start.go:128] duration metric: took 2.307944458s to createHost
	I1209 03:46:58.450213   11371 start.go:83] releasing machines lock for "flannel-557000", held for 2.308153542s
	W1209 03:46:58.450283   11371 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:58.466577   11371 out.go:177] * Deleting "flannel-557000" in qemu2 ...
	W1209 03:46:58.497401   11371 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:46:58.497421   11371 start.go:729] Will try again in 5 seconds ...
	I1209 03:47:03.499711   11371 start.go:360] acquireMachinesLock for flannel-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:47:03.500136   11371 start.go:364] duration metric: took 343.125µs to acquireMachinesLock for "flannel-557000"
	I1209 03:47:03.500275   11371 start.go:93] Provisioning new machine with config: &{Name:flannel-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:47:03.500577   11371 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:47:03.506140   11371 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:47:03.558114   11371 start.go:159] libmachine.API.Create for "flannel-557000" (driver="qemu2")
	I1209 03:47:03.558196   11371 client.go:168] LocalClient.Create starting
	I1209 03:47:03.558401   11371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:47:03.558490   11371 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:03.558511   11371 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:03.558576   11371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:47:03.558643   11371 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:03.558660   11371 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:03.559297   11371 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:47:03.735874   11371 main.go:141] libmachine: Creating SSH key...
	I1209 03:47:03.815228   11371 main.go:141] libmachine: Creating Disk image...
	I1209 03:47:03.815235   11371 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:47:03.815471   11371 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2
	I1209 03:47:03.825143   11371 main.go:141] libmachine: STDOUT: 
	I1209 03:47:03.825164   11371 main.go:141] libmachine: STDERR: 
	I1209 03:47:03.825231   11371 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2 +20000M
	I1209 03:47:03.833696   11371 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:47:03.833713   11371 main.go:141] libmachine: STDERR: 
	I1209 03:47:03.833727   11371 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2
	I1209 03:47:03.833735   11371 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:47:03.833750   11371 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:47:03.833785   11371 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a2:b4:f7:c8:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/flannel-557000/disk.qcow2
	I1209 03:47:03.835562   11371 main.go:141] libmachine: STDOUT: 
	I1209 03:47:03.835577   11371 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:47:03.835589   11371 client.go:171] duration metric: took 277.392583ms to LocalClient.Create
	I1209 03:47:05.837911   11371 start.go:128] duration metric: took 2.33730975s to createHost
	I1209 03:47:05.837999   11371 start.go:83] releasing machines lock for "flannel-557000", held for 2.337883667s
	W1209 03:47:05.838416   11371 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:05.853183   11371 out.go:201] 
	W1209 03:47:05.857335   11371 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:47:05.857363   11371 out.go:270] * 
	* 
	W1209 03:47:05.859853   11371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:47:05.876143   11371 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.88s)
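
The error is reproducible without minikube: socket_vmnet_client first connects to the daemon socket and only then execs the command it wraps, passing the connection as fd 3. A reproduction sketch (assuming the usual "socket_vmnet_client <socket-path> <command...>" invocation; "true" is just a placeholder command):

	# With no daemon listening on the socket, this fails with the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# before the wrapped command ever runs.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true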

TestNetworkPlugins/group/bridge/Start (10.09s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.086304666s)

-- stdout --
	* [bridge-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-557000" primary control-plane node in "bridge-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:47:07.395284   11459 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:47:07.395452   11459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:47:07.395456   11459 out.go:358] Setting ErrFile to fd 2...
	I1209 03:47:07.395458   11459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:47:07.395587   11459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:47:07.396874   11459 out.go:352] Setting JSON to false
	I1209 03:47:07.416800   11459 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6398,"bootTime":1733738429,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:47:07.416886   11459 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:47:07.421008   11459 out.go:177] * [bridge-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:47:07.428090   11459 notify.go:220] Checking for updates...
	I1209 03:47:07.431848   11459 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:47:07.440503   11459 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:47:07.450855   11459 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:47:07.453986   11459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:47:07.457051   11459 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:47:07.460001   11459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:47:07.463276   11459 config.go:182] Loaded profile config "flannel-557000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:47:07.463355   11459 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:47:07.463415   11459 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:47:07.465962   11459 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:47:07.472916   11459 start.go:297] selected driver: qemu2
	I1209 03:47:07.472922   11459 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:47:07.472929   11459 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:47:07.475474   11459 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:47:07.477957   11459 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:47:07.481082   11459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:47:07.481106   11459 cni.go:84] Creating CNI manager for "bridge"
	I1209 03:47:07.481109   11459 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:47:07.481162   11459 start.go:340] cluster config:
	{Name:bridge-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:47:07.485902   11459 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:47:07.493936   11459 out.go:177] * Starting "bridge-557000" primary control-plane node in "bridge-557000" cluster
	I1209 03:47:07.497977   11459 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:47:07.498016   11459 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:47:07.498027   11459 cache.go:56] Caching tarball of preloaded images
	I1209 03:47:07.498123   11459 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:47:07.498130   11459 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:47:07.498190   11459 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/bridge-557000/config.json ...
	I1209 03:47:07.498200   11459 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/bridge-557000/config.json: {Name:mk9742096a70d93d2e66f8873270959718482c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:47:07.498508   11459 start.go:360] acquireMachinesLock for bridge-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:47:07.498551   11459 start.go:364] duration metric: took 38.167µs to acquireMachinesLock for "bridge-557000"
	I1209 03:47:07.498562   11459 start.go:93] Provisioning new machine with config: &{Name:bridge-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:47:07.498589   11459 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:47:07.503051   11459 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:47:07.518891   11459 start.go:159] libmachine.API.Create for "bridge-557000" (driver="qemu2")
	I1209 03:47:07.518919   11459 client.go:168] LocalClient.Create starting
	I1209 03:47:07.518989   11459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:47:07.519024   11459 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:07.519038   11459 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:07.519074   11459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:47:07.519102   11459 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:07.519115   11459 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:07.519485   11459 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:47:07.774595   11459 main.go:141] libmachine: Creating SSH key...
	I1209 03:47:07.838344   11459 main.go:141] libmachine: Creating Disk image...
	I1209 03:47:07.838354   11459 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:47:07.838593   11459 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2
	I1209 03:47:07.849328   11459 main.go:141] libmachine: STDOUT: 
	I1209 03:47:07.849356   11459 main.go:141] libmachine: STDERR: 
	I1209 03:47:07.849431   11459 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2 +20000M
	I1209 03:47:07.858886   11459 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:47:07.858909   11459 main.go:141] libmachine: STDERR: 
	I1209 03:47:07.858924   11459 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2
	I1209 03:47:07.858930   11459 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:47:07.858942   11459 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:47:07.858977   11459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:a3:6d:56:7c:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2
	I1209 03:47:07.860874   11459 main.go:141] libmachine: STDOUT: 
	I1209 03:47:07.860893   11459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:47:07.860914   11459 client.go:171] duration metric: took 341.995459ms to LocalClient.Create
	I1209 03:47:09.863108   11459 start.go:128] duration metric: took 2.364539583s to createHost
	I1209 03:47:09.863217   11459 start.go:83] releasing machines lock for "bridge-557000", held for 2.36470075s
	W1209 03:47:09.863320   11459 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:09.881623   11459 out.go:177] * Deleting "bridge-557000" in qemu2 ...
	W1209 03:47:09.906462   11459 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:09.906486   11459 start.go:729] Will try again in 5 seconds ...
	I1209 03:47:14.908680   11459 start.go:360] acquireMachinesLock for bridge-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:47:14.909358   11459 start.go:364] duration metric: took 561.291µs to acquireMachinesLock for "bridge-557000"
	I1209 03:47:14.909543   11459 start.go:93] Provisioning new machine with config: &{Name:bridge-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:47:14.909897   11459 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:47:14.919403   11459 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:47:14.968003   11459 start.go:159] libmachine.API.Create for "bridge-557000" (driver="qemu2")
	I1209 03:47:14.968048   11459 client.go:168] LocalClient.Create starting
	I1209 03:47:14.968253   11459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:47:14.968354   11459 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:14.968379   11459 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:14.968450   11459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:47:14.968516   11459 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:14.968529   11459 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:14.969123   11459 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:47:15.140955   11459 main.go:141] libmachine: Creating SSH key...
	I1209 03:47:15.381610   11459 main.go:141] libmachine: Creating Disk image...
	I1209 03:47:15.381619   11459 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:47:15.381879   11459 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2
	I1209 03:47:15.392416   11459 main.go:141] libmachine: STDOUT: 
	I1209 03:47:15.392438   11459 main.go:141] libmachine: STDERR: 
	I1209 03:47:15.392498   11459 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2 +20000M
	I1209 03:47:15.400952   11459 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:47:15.400970   11459 main.go:141] libmachine: STDERR: 
	I1209 03:47:15.400982   11459 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2
	I1209 03:47:15.400986   11459 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:47:15.400996   11459 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:47:15.401032   11459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a6:96:ec:f8:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/bridge-557000/disk.qcow2
	I1209 03:47:15.402823   11459 main.go:141] libmachine: STDOUT: 
	I1209 03:47:15.402843   11459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:47:15.402863   11459 client.go:171] duration metric: took 434.816959ms to LocalClient.Create
	I1209 03:47:17.404490   11459 start.go:128] duration metric: took 2.494576959s to createHost
	I1209 03:47:17.404558   11459 start.go:83] releasing machines lock for "bridge-557000", held for 2.495222916s
	W1209 03:47:17.404846   11459 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:17.424730   11459 out.go:201] 
	W1209 03:47:17.428864   11459 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:47:17.428889   11459 out.go:270] * 
	* 
	W1209 03:47:17.430715   11459 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:47:17.439631   11459 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.09s)
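
Deleting and recreating the profile, as the retry loop above does, cannot help while the daemon itself is down; recovery has to happen on the host. How to restart it depends on how socket_vmnet was installed, which the log does not confirm, so both commands below are assumptions:

	# Remediation sketch. For a launchd-managed install (the daemon must
	# run as root to use vmnet), check whether the service is loaded:
	sudo launchctl list | grep -i socket_vmnet
	# For a Homebrew-managed install, minikube's docs suggest:
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet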

TestNetworkPlugins/group/kubenet/Start (11.3s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-557000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (11.29497775s)

-- stdout --
	* [kubenet-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-557000" primary control-plane node in "kubenet-557000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 03:47:08.533595   11515 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:47:08.533737   11515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:47:08.533740   11515 out.go:358] Setting ErrFile to fd 2...
	I1209 03:47:08.533742   11515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:47:08.533889   11515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:47:08.534989   11515 out.go:352] Setting JSON to false
	I1209 03:47:08.552568   11515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6399,"bootTime":1733738429,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:47:08.552640   11515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:47:08.559015   11515 out.go:177] * [kubenet-557000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:47:08.567013   11515 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:47:08.567075   11515 notify.go:220] Checking for updates...
	I1209 03:47:08.573940   11515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:47:08.576977   11515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:47:08.580974   11515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:47:08.584006   11515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:47:08.586944   11515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:47:08.590360   11515 config.go:182] Loaded profile config "bridge-557000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:47:08.590441   11515 config.go:182] Loaded profile config "multinode-263000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:47:08.590495   11515 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:47:08.593913   11515 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 03:47:08.600955   11515 start.go:297] selected driver: qemu2
	I1209 03:47:08.600962   11515 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:47:08.600979   11515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:47:08.603585   11515 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:47:08.606921   11515 out.go:177] * Automatically selected the socket_vmnet network
	I1209 03:47:08.610031   11515 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 03:47:08.610058   11515 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1209 03:47:08.610092   11515 start.go:340] cluster config:
	{Name:kubenet-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:47:08.614894   11515 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:47:08.622957   11515 out.go:177] * Starting "kubenet-557000" primary control-plane node in "kubenet-557000" cluster
	I1209 03:47:08.626922   11515 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:47:08.626942   11515 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:47:08.626960   11515 cache.go:56] Caching tarball of preloaded images
	I1209 03:47:08.627038   11515 preload.go:172] Found /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 03:47:08.627044   11515 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:47:08.627109   11515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kubenet-557000/config.json ...
	I1209 03:47:08.627120   11515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/kubenet-557000/config.json: {Name:mkb589e047ee6c7aaa4c1f36f3bab14801b960be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:47:08.627582   11515 start.go:360] acquireMachinesLock for kubenet-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:47:09.863376   11515 start.go:364] duration metric: took 1.235794625s to acquireMachinesLock for "kubenet-557000"
	I1209 03:47:09.863555   11515 start.go:93] Provisioning new machine with config: &{Name:kubenet-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:47:09.863764   11515 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:47:09.873604   11515 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:47:09.924698   11515 start.go:159] libmachine.API.Create for "kubenet-557000" (driver="qemu2")
	I1209 03:47:09.924746   11515 client.go:168] LocalClient.Create starting
	I1209 03:47:09.924935   11515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:47:09.925009   11515 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:09.925028   11515 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:09.925101   11515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:47:09.925158   11515 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:09.925178   11515 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:09.926032   11515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:47:10.098859   11515 main.go:141] libmachine: Creating SSH key...
	I1209 03:47:10.227133   11515 main.go:141] libmachine: Creating Disk image...
	I1209 03:47:10.227139   11515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:47:10.227380   11515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2
	I1209 03:47:10.237697   11515 main.go:141] libmachine: STDOUT: 
	I1209 03:47:10.237717   11515 main.go:141] libmachine: STDERR: 
	I1209 03:47:10.237773   11515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2 +20000M
	I1209 03:47:10.246361   11515 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:47:10.246377   11515 main.go:141] libmachine: STDERR: 
	I1209 03:47:10.246394   11515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2
	I1209 03:47:10.246400   11515 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:47:10.246412   11515 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:47:10.246443   11515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:11:a5:01:75:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2
	I1209 03:47:10.248293   11515 main.go:141] libmachine: STDOUT: 
	I1209 03:47:10.248308   11515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:47:10.248328   11515 client.go:171] duration metric: took 323.581417ms to LocalClient.Create
	I1209 03:47:12.250465   11515 start.go:128] duration metric: took 2.386716667s to createHost
	I1209 03:47:12.250604   11515 start.go:83] releasing machines lock for "kubenet-557000", held for 2.387176917s
	W1209 03:47:12.250657   11515 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:12.264963   11515 out.go:177] * Deleting "kubenet-557000" in qemu2 ...
	W1209 03:47:12.300876   11515 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:12.300907   11515 start.go:729] Will try again in 5 seconds ...
	I1209 03:47:17.303036   11515 start.go:360] acquireMachinesLock for kubenet-557000: {Name:mke66dc4d703739543a4a3caeb1655c8d31e5e1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 03:47:17.404687   11515 start.go:364] duration metric: took 101.553541ms to acquireMachinesLock for "kubenet-557000"
	I1209 03:47:17.404873   11515 start.go:93] Provisioning new machine with config: &{Name:kubenet-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 03:47:17.405149   11515 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 03:47:17.413741   11515 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 03:47:17.464986   11515 start.go:159] libmachine.API.Create for "kubenet-557000" (driver="qemu2")
	I1209 03:47:17.465040   11515 client.go:168] LocalClient.Create starting
	I1209 03:47:17.465204   11515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/ca.pem
	I1209 03:47:17.465259   11515 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:17.465278   11515 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:17.465353   11515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20068-6536/.minikube/certs/cert.pem
	I1209 03:47:17.465402   11515 main.go:141] libmachine: Decoding PEM data...
	I1209 03:47:17.465416   11515 main.go:141] libmachine: Parsing certificate...
	I1209 03:47:17.466116   11515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 03:47:17.638656   11515 main.go:141] libmachine: Creating SSH key...
	I1209 03:47:17.739163   11515 main.go:141] libmachine: Creating Disk image...
	I1209 03:47:17.739174   11515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 03:47:17.739421   11515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2
	I1209 03:47:17.749913   11515 main.go:141] libmachine: STDOUT: 
	I1209 03:47:17.749943   11515 main.go:141] libmachine: STDERR: 
	I1209 03:47:17.750006   11515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2 +20000M
	I1209 03:47:17.759441   11515 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 03:47:17.759466   11515 main.go:141] libmachine: STDERR: 
	I1209 03:47:17.759480   11515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2
	I1209 03:47:17.759485   11515 main.go:141] libmachine: Starting QEMU VM...
	I1209 03:47:17.759495   11515 qemu.go:418] Using hvf for hardware acceleration
	I1209 03:47:17.759526   11515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:4e:44:5a:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20068-6536/.minikube/machines/kubenet-557000/disk.qcow2
	I1209 03:47:17.761608   11515 main.go:141] libmachine: STDOUT: 
	I1209 03:47:17.761623   11515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 03:47:17.761637   11515 client.go:171] duration metric: took 296.596667ms to LocalClient.Create
	I1209 03:47:19.763679   11515 start.go:128] duration metric: took 2.358554833s to createHost
	I1209 03:47:19.763693   11515 start.go:83] releasing machines lock for "kubenet-557000", held for 2.359017916s
	W1209 03:47:19.763773   11515 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 03:47:19.773614   11515 out.go:201] 
	W1209 03:47:19.777577   11515 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 03:47:19.777583   11515 out.go:270] * 
	* 
	W1209 03:47:19.778126   11515 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:47:19.785707   11515 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (11.30s)
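
Note: every qemu2 VM creation in this failure dies at the same step: socket_vmnet_client cannot reach the daemon socket, so QEMU never launches. A minimal pre-flight sketch before re-running, assuming a Homebrew-managed socket_vmnet install (the socket path and binary are taken from the log above; the service invocation is an assumption):

    # Check that the socket_vmnet daemon is serving its unix socket
    # (this is the exact path the failing qemu command tried to connect to).
    ls -l /var/run/socket_vmnet

    # If the socket is missing, start the daemon; with Homebrew this is
    # typically run as root (service name assumed):
    sudo brew services start socket_vmnet

    # Then retry the failing profile:
    out/minikube-darwin-arm64 start -p kubenet-557000 --driver=qemu2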


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 10.38
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.11
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.31
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 10.01
46 TestFunctional/serial/CopySyncFile 0.01
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.99
55 TestFunctional/serial/CacheCmd/cache/add_local 0.99
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.23
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.65
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.05
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.08
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
227 TestStoppedBinaryUpgrade/Setup 1.01
229 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
258 TestNoKubernetes/serial/ProfileList 0.11
259 TestNoKubernetes/serial/Stop 1.93
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
266 TestStartStop/group/old-k8s-version/serial/Stop 3.34
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
277 TestStartStop/group/no-preload/serial/Stop 3
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
288 TestStartStop/group/embed-certs/serial/Stop 3.48
289 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.53
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
308 TestStartStop/group/newest-cni/serial/DeployApp 0
309 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
310 TestStartStop/group/newest-cni/serial/Stop 3.27
311 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
313 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 03:21:56.424795    7820 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1209 03:21:56.425174    7820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-118000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-118000: exit status 85 (99.929125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |          |
	|         | -p download-only-118000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 03:21:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:21:34.939211    7821 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:21:34.939415    7821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:21:34.939418    7821 out.go:358] Setting ErrFile to fd 2...
	I1209 03:21:34.939420    7821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:21:34.939542    7821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	W1209 03:21:34.939632    7821 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20068-6536/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20068-6536/.minikube/config/config.json: no such file or directory
	I1209 03:21:34.941051    7821 out.go:352] Setting JSON to true
	I1209 03:21:34.959055    7821 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4865,"bootTime":1733738429,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:21:34.959137    7821 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:21:34.964984    7821 out.go:97] [download-only-118000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:21:34.965106    7821 notify.go:220] Checking for updates...
	W1209 03:21:34.965148    7821 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 03:21:34.967939    7821 out.go:169] MINIKUBE_LOCATION=20068
	I1209 03:21:34.970992    7821 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:21:34.975918    7821 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:21:34.980021    7821 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:21:34.982944    7821 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	W1209 03:21:34.988970    7821 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 03:21:34.989181    7821 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:21:34.991865    7821 out.go:97] Using the qemu2 driver based on user configuration
	I1209 03:21:34.991885    7821 start.go:297] selected driver: qemu2
	I1209 03:21:34.991907    7821 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:21:34.992003    7821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:21:34.994929    7821 out.go:169] Automatically selected the socket_vmnet network
	I1209 03:21:35.001384    7821 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1209 03:21:35.001467    7821 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 03:21:35.001538    7821 cni.go:84] Creating CNI manager for ""
	I1209 03:21:35.001568    7821 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 03:21:35.001630    7821 start.go:340] cluster config:
	{Name:download-only-118000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:21:35.006369    7821 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:21:35.010966    7821 out.go:97] Downloading VM boot image ...
	I1209 03:21:35.010980    7821 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1209 03:21:46.043159    7821 out.go:97] Starting "download-only-118000" primary control-plane node in "download-only-118000" cluster
	I1209 03:21:46.043179    7821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:21:46.101072    7821 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:21:46.101112    7821 cache.go:56] Caching tarball of preloaded images
	I1209 03:21:46.101323    7821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:21:46.105516    7821 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 03:21:46.105524    7821 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:46.185965    7821 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 03:21:55.113310    7821 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:55.113595    7821 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:55.807719    7821 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 03:21:55.807917    7821 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/download-only-118000/config.json ...
	I1209 03:21:55.807933    7821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/download-only-118000/config.json: {Name:mkb9b1b4d0abc72f7eea8177d8ece2e4cb09aaf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:21:55.808196    7821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 03:21:55.808486    7821 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1209 03:21:56.372849    7821 out.go:193] 
	W1209 03:21:56.380832    7821 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320 0x1096dc320] Decompressors:map[bz2:0x14000514e40 gz:0x14000514e48 tar:0x14000514de0 tar.bz2:0x14000514df0 tar.gz:0x14000514e00 tar.xz:0x14000514e10 tar.zst:0x14000514e20 tbz2:0x14000514df0 tgz:0x14000514e00 txz:0x14000514e10 tzst:0x14000514e20 xz:0x14000514e50 zip:0x14000514e60 zst:0x14000514e58] Getters:map[file:0x14000888680 http:0x14000e1e230 https:0x14000e1e280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1209 03:21:56.380858    7821 out_reason.go:110] 
	W1209 03:21:56.388868    7821 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 03:21:56.392769    7821 out.go:193] 
	
	
	* The control-plane node download-only-118000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-118000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
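
Note: the exit status 85 above has a clear cause in the getter error: the v1.20.0 kubectl download 404s because dl.k8s.io publishes no darwin/arm64 checksum (or binary) for that release. A quick confirmation sketch using the URL from the log (the curl invocation is standard; reading the 404 as "v1.20.0 predates darwin/arm64 builds" is an inference from this log, not something the report states):

    # Follow redirects and print only the final status code; a 404 here
    # matches the "bad response code: 404" in the getter error above.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256

    # The same check for v1.31.2, which downloads cleanly later in this
    # report, should print 200.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256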

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-118000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (10.38s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-912000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-912000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (10.382133042s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (10.38s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 03:22:07.186098    7820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1209 03:22:07.186158    7820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-912000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-912000: exit status 85 (85.208708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
	|         | -p download-only-118000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
	| delete  | -p download-only-118000        | download-only-118000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST | 09 Dec 24 03:21 PST |
	| start   | -o=json --download-only        | download-only-912000 | jenkins | v1.34.0 | 09 Dec 24 03:21 PST |                     |
	|         | -p download-only-912000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 03:21:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 03:21:56.836154    7849 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:21:56.836312    7849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:21:56.836315    7849 out.go:358] Setting ErrFile to fd 2...
	I1209 03:21:56.836317    7849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:21:56.836439    7849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:21:56.837656    7849 out.go:352] Setting JSON to true
	I1209 03:21:56.855274    7849 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4887,"bootTime":1733738429,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:21:56.855354    7849 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:21:56.860157    7849 out.go:97] [download-only-912000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:21:56.860259    7849 notify.go:220] Checking for updates...
	I1209 03:21:56.864033    7849 out.go:169] MINIKUBE_LOCATION=20068
	I1209 03:21:56.867212    7849 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:21:56.868855    7849 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:21:56.872040    7849 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:21:56.875099    7849 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	W1209 03:21:56.881063    7849 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 03:21:56.881239    7849 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:21:56.884052    7849 out.go:97] Using the qemu2 driver based on user configuration
	I1209 03:21:56.884061    7849 start.go:297] selected driver: qemu2
	I1209 03:21:56.884065    7849 start.go:901] validating driver "qemu2" against <nil>
	I1209 03:21:56.884116    7849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 03:21:56.887041    7849 out.go:169] Automatically selected the socket_vmnet network
	I1209 03:21:56.893332    7849 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1209 03:21:56.893425    7849 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 03:21:56.893447    7849 cni.go:84] Creating CNI manager for ""
	I1209 03:21:56.893469    7849 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 03:21:56.893478    7849 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 03:21:56.893522    7849 start.go:340] cluster config:
	{Name:download-only-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:21:56.897770    7849 iso.go:125] acquiring lock: {Name:mk591a3ea9cf1b4f98266f6aa4f8e84d1f00efa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 03:21:56.901083    7849 out.go:97] Starting "download-only-912000" primary control-plane node in "download-only-912000" cluster
	I1209 03:21:56.901089    7849 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:21:56.954197    7849 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:21:56.954211    7849 cache.go:56] Caching tarball of preloaded images
	I1209 03:21:56.954391    7849 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:21:56.957644    7849 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 03:21:56.957652    7849 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:21:57.035731    7849 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 03:22:05.269593    7849 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:22:05.269780    7849 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1209 03:22:05.792896    7849 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 03:22:05.793097    7849 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/download-only-912000/config.json ...
	I1209 03:22:05.793115    7849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20068-6536/.minikube/profiles/download-only-912000/config.json: {Name:mk4d35c72a9dcbc348f387bb97ba37940117c202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 03:22:05.793407    7849 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 03:22:05.793565    7849 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/darwin/arm64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-912000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-912000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)
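
Note: the "verifying checksum" step logged above reduces to an md5 comparison against the value embedded in the download URL. A manual spot-check sketch of the cached tarball (md5 -q is the BSD digest tool shipped with macOS; the path and expected digest are taken from the download line in the log):

    # Recompute the digest and compare it by eye to
    # 5f3d7369b12f47138aa2863bb7bda916 from the download URL above.
    md5 -q /Users/jenkins/minikube-integration/20068-6536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4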

TestDownloadOnly/v1.31.2/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.11s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-912000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
I1209 03:22:07.726570    7820 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-952000 --alsologtostderr --binary-mirror http://127.0.0.1:60309 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-952000
--- PASS: TestBinaryMirror (0.30s)
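
Note: --binary-mirror only redirects where minikube fetches kubectl and friends, so any static HTTP server that reproduces the release path layout will do. A rough sketch of a hand-rolled mirror (the directory shape mirrors the dl.k8s.io path seen above, but the exact layout minikube requests, and reusing port 60309, are assumptions):

    # Lay out the mirror like the upstream release host and serve it:
    mkdir -p mirror/v1.31.2/bin/darwin/arm64
    # ...copy kubectl and kubectl.sha256 into that directory...
    ( cd mirror && python3 -m http.server 60309 )

    # In another shell, point minikube at it, as the test does:
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-952000 \
      --binary-mirror http://127.0.0.1:60309 --driver=qemu2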

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-850000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-850000: exit status 85 (60.788041ms)

-- stdout --
	* Profile "addons-850000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-850000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-850000: exit status 85 (64.646292ms)

-- stdout --
	* Profile "addons-850000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.31s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1209 03:43:19.137742    7820 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 03:43:19.137933    7820 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1209 03:43:21.093029    7820 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1209 03:43:21.093275    7820 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1209 03:43:21.093334    7820 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit
I1209 03:43:21.587049    7820 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0 0x108df96e0] Decompressors:map[bz2:0x14000522ce0 gz:0x14000522ce8 tar:0x14000522c70 tar.bz2:0x14000522c90 tar.gz:0x14000522ca0 tar.xz:0x14000522cb0 tar.zst:0x14000522cd0 tbz2:0x14000522c90 tgz:0x14000522ca0 txz:0x14000522cb0 tzst:0x14000522cd0 xz:0x14000522d10 zip:0x14000522d20 zst:0x14000522d18] Getters:map[file:0x1400153bd00 http:0x1400086ccd0 https:0x1400086cd20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 03:43:21.587195    7820 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3995394887/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.31s)
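
The download log above also documents the fallback order: the installer first requests the arch-suffixed artifact (docker-machine-driver-hyperkit-arm64), gets a 404 on its checksum file, and only then retries the common, un-suffixed name. A minimal Go sketch of that pattern, assuming a caller-supplied fetch function (downloadDriver and fetch are illustrative names, not minikube's actual API):

package main

import (
	"fmt"
	"runtime"
)

// downloadDriver tries the arch-specific artifact first and falls back to
// the common name, mirroring the 404-then-retry sequence in the log above.
func downloadDriver(baseURL string, fetch func(url string) error) error {
	archURL := fmt.Sprintf("%s-%s", baseURL, runtime.GOARCH) // e.g. ...-arm64
	err := fetch(archURL)
	if err == nil {
		return nil
	}
	fmt.Printf("failed to download arch specific driver: %v; trying the common version\n", err)
	return fetch(baseURL)
}

func main() {
	base := "https://example.invalid/docker-machine-driver-hyperkit" // hypothetical URL
	fetch := func(url string) error { // simulated: arch-specific URL 404s, common one succeeds
		if url != base {
			return fmt.Errorf("bad response code: 404")
		}
		return nil
	}
	fmt.Println(downloadDriver(base, fetch))
}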

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status: exit status 7 (35.625583ms)

-- stdout --
	nospam-251000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status: exit status 7 (34.89125ms)

-- stdout --
	nospam-251000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status: exit status 7 (34.506375ms)

-- stdout --
	nospam-251000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)
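
Every status call above exits 7 rather than 0 because the profile's host is stopped; the test only checks that repeated invocations stay free of spurious warnings, so the non-zero exits are expected. A rough sketch of reading that exit code from Go (binary path and profile name copied from the log; this is not the test's own helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run `minikube status` against the stopped profile and inspect the exit code.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "nospam-251000", "status")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The log shows exit status 7 when host/kubelet/apiserver are Stopped.
		fmt.Printf("status exited %d:\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("status exited 0:\n%s", out)
}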

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause: exit status 83 (44.164042ms)

-- stdout --
	* The control-plane node nospam-251000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-251000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause: exit status 83 (44.7725ms)

-- stdout --
	* The control-plane node nospam-251000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-251000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause: exit status 83 (42.748958ms)

-- stdout --
	* The control-plane node nospam-251000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-251000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause: exit status 83 (44.902291ms)

-- stdout --
	* The control-plane node nospam-251000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-251000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause: exit status 83 (44.759875ms)

-- stdout --
	* The control-plane node nospam-251000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-251000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause: exit status 83 (42.793125ms)

-- stdout --
	* The control-plane node nospam-251000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-251000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (10.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 stop: (3.073173416s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 stop: (3.684917167s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-251000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-251000 stop: (3.251167334s)
--- PASS: TestErrorSpam/stop (10.01s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20068-6536/.minikube/files/etc/test/nested/copy/7820/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.99s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local672970537/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache add minikube-local-cache-test:functional-174000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 cache delete minikube-local-cache-test:functional-174000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-174000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 config get cpus: exit status 14 (35.915334ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 config get cpus: exit status 14 (40.138625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
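
The sequence above pins down the `config` semantics the test relies on: `config unset` succeeds even when the key is absent, while `config get` on a missing key exits 14 with "specified key could not be found in config". A small sketch of that round trip via os/exec (binary path and profile name taken from the log; the run helper is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary from the log and returns its exit code.
func run(args ...string) (int, error) {
	err := exec.Command("out/minikube-darwin-arm64", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err
}

func main() {
	p := "functional-174000"
	code, _ := run("-p", p, "config", "unset", "cpus")
	fmt.Println("unset:", code) // 0 even if the key was never set
	code, _ = run("-p", p, "config", "get", "cpus")
	fmt.Println("get after unset:", code) // 14: key not found
}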

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (171.312125ms)

-- stdout --
	* [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 03:23:44.754531    8414 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:23:44.754724    8414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:44.754728    8414 out.go:358] Setting ErrFile to fd 2...
	I1209 03:23:44.754731    8414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:44.754891    8414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:23:44.756218    8414 out.go:352] Setting JSON to false
	I1209 03:23:44.776530    8414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4995,"bootTime":1733738429,"procs":554,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:23:44.776590    8414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:23:44.782209    8414 out.go:177] * [functional-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 03:23:44.789045    8414 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:23:44.789076    8414 notify.go:220] Checking for updates...
	I1209 03:23:44.797185    8414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:23:44.800122    8414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:23:44.804141    8414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:23:44.807179    8414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:23:44.810203    8414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:23:44.813454    8414 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:23:44.813799    8414 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:23:44.818190    8414 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 03:23:44.825098    8414 start.go:297] selected driver: qemu2
	I1209 03:23:44.825103    8414 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:23:44.825153    8414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:23:44.832193    8414 out.go:201] 
	W1209 03:23:44.836115    8414 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 03:23:44.840144    8414 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
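
The first dry run fails by design: 250MB is below minikube's usable minimum of 1800MB, which yields exit status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY reason before any VM work starts. A sketch of that floor check (the 1800MB constant comes from the log message; the function name is invented):

package main

import "fmt"

const minUsableMemoryMB = 1800 // usable minimum reported in the log

// validateRequestedMemory mirrors the guard behind RSRC_INSUFFICIENT_REQ_MEMORY.
func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250)) // mirrors the --memory 250MB dry run above
}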

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-174000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.973459ms)

-- stdout --
	* [functional-174000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 03:23:44.993493    8425 out.go:345] Setting OutFile to fd 1 ...
	I1209 03:23:44.993647    8425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:44.993650    8425 out.go:358] Setting ErrFile to fd 2...
	I1209 03:23:44.993660    8425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 03:23:44.993792    8425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20068-6536/.minikube/bin
	I1209 03:23:44.995335    8425 out.go:352] Setting JSON to false
	I1209 03:23:45.013714    8425 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4996,"bootTime":1733738429,"procs":554,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 03:23:45.013795    8425 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 03:23:45.018238    8425 out.go:177] * [functional-174000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1209 03:23:45.025193    8425 out.go:177]   - MINIKUBE_LOCATION=20068
	I1209 03:23:45.025260    8425 notify.go:220] Checking for updates...
	I1209 03:23:45.033116    8425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	I1209 03:23:45.036158    8425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 03:23:45.039205    8425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 03:23:45.042150    8425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	I1209 03:23:45.049288    8425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 03:23:45.052513    8425 config.go:182] Loaded profile config "functional-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 03:23:45.052767    8425 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 03:23:45.057125    8425 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1209 03:23:45.064216    8425 start.go:297] selected driver: qemu2
	I1209 03:23:45.064223    8425 start.go:901] validating driver "qemu2" against &{Name:functional-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 03:23:45.064280    8425 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 03:23:45.071177    8425 out.go:201] 
	W1209 03:23:45.075155    8425 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 03:23:45.079121    8425 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.616008459s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-174000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image rm kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-174000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 image save --daemon kicbase/echo-server:functional-174000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-174000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
I1209 03:23:10.072094    7820 retry.go:31] will retry after 3.219879554s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1315: Took "51.038083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "38.004625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "52.938625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.620167ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014378791s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
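
The dscacheutil query shows that, with the tunnel up, the in-cluster service name resolves from the macOS host itself. The same check can be expressed with Go's standard resolver; a sketch assuming the tunnel from the earlier StartTunnel step is still running:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve the tunnel-published service name, as dscacheutil does above.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	addrs, err := net.DefaultResolver.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}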

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-174000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-174000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-174000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-174000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-275000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-275000 --output=json --user=testUser: (2.07782475s)
--- PASS: TestJSONOutput/stop/Command (2.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-336000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-336000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.911083ms)

-- stdout --
	{"specversion":"1.0","id":"72103188-0337-4a5a-b426-628d145b6748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-336000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e1e180f-5bb8-41ce-a0ba-b3845e11860c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20068"}}
	{"specversion":"1.0","id":"cec7c8da-f1d8-4ae6-8302-d0023bd0edd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig"}}
	{"specversion":"1.0","id":"b5947abe-4946-41ef-b7b9-780cbd18db01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"cae7a179-48bd-415e-89e9-bfce97c644da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d14ddf94-2ba8-4a18-a1dc-a5eb6b7c44a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube"}}
	{"specversion":"1.0","id":"2b2e7a10-4a00-4f30-ad56-be773568cec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"72e69e74-3da5-4542-b643-d9b73799bc9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-336000
--- PASS: TestErrorJSONOutput (0.21s)
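
With --output=json, each line minikube prints is a CloudEvents envelope, and the failure arrives as an io.k8s.sigs.minikube.error event carrying the exit code in its data payload. A sketch of consuming that stream, declaring only the fields read here (the sample line is abridged from the log):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent models just the envelope fields this sketch inspects.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	logLine := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	sc := bufio.NewScanner(strings.NewReader(logLine))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}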

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-416000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-797000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (108.018083ms)

-- stdout --
	* [NoKubernetes-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20068
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20068-6536/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20068-6536/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
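
The MK_USAGE failure above is a mutual-exclusion check: --kubernetes-version cannot be combined with --no-kubernetes. A minimal reproduction of that guard with the standard flag package (the flag names and exit status 14 match the log; everything else is illustrative):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status seen in the log
	}
}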

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-797000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-797000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.2595ms)

-- stdout --
	* The control-plane node NoKubernetes-797000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-797000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

                                                
                                    
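Note: `systemctl is-active --quiet` exits 0 only when the queried unit is active, so the test accepts any non-zero exit as proof that kubelet is not running. Here the non-zero status (83) actually comes from minikube itself because the host VM is stopped, which satisfies the check either way. A sketch of the same probe (binary path, profile, and command from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe kubelet through `minikube ssh`; a zero exit would mean the
	// kubelet unit is active, which this --no-kubernetes scenario forbids.
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh",
		"-p", "NoKubernetes-797000",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		fmt.Println("FAIL: kubelet is active, but --no-kubernetes was requested")
		return
	}
	fmt.Println("PASS: kubelet is not running")
}
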
TestNoKubernetes/serial/ProfileList (0.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.11s)

TestNoKubernetes/serial/Stop (1.93s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-797000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-797000: (1.933838417s)
--- PASS: TestNoKubernetes/serial/Stop (1.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-797000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-797000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.420583ms)
-- stdout --
	* The control-plane node NoKubernetes-797000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-797000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-520000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-520000 --alsologtostderr -v=3: (3.340446792s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-520000 -n old-k8s-version-520000: exit status 7 (71.780416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-520000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

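Note: `minikube status` reports a stopped host through its exit code rather than through stderr, and the suite tolerates exit status 7 here ("may be ok") before enabling the addon against the stopped profile. A sketch of decoding that status (binary path and profile name from the log above; reading 7 as a stopped host follows the test's own annotation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-520000")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out) // prints "Stopped" in the log above
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Non-zero, but acceptable: the test only needs the profile to
		// exist, not to be running, before `addons enable dashboard`.
		fmt.Println("status error: exit status 7 (may be ok)")
	}
}
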
TestStartStop/group/no-preload/serial/Stop (3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-467000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-467000 --alsologtostderr -v=3: (3.000489625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-467000 -n no-preload-467000: exit status 7 (64.550291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-467000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-015000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-015000 --alsologtostderr -v=3: (3.482036666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.48s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-015000 -n embed-certs-015000: exit status 7 (67.59175ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-015000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-193000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-193000 --alsologtostderr -v=3: (3.533404958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-193000 -n default-k8s-diff-port-193000: exit status 7 (69.435042ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-193000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-402000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-402000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-402000 --alsologtostderr -v=3: (3.268820792s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-402000 -n newest-cni-402000: exit status 7 (71.417417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-402000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.03s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port377251147/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733743390247696000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port377251147/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733743390247696000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port377251147/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733743390247696000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port377251147/001/test-1733743390247696000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.228375ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:10.311457    7820 retry.go:31] will retry after 677.171237ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.578959ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:11.081518    7820 retry.go:31] will retry after 594.308838ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.043167ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:11.770201    7820 retry.go:31] will retry after 1.0842508s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.450875ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:12.948299    7820 retry.go:31] will retry after 1.24595781s: exit status 83
I1209 03:23:13.294214    7820 retry.go:31] will retry after 3.401127316s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.043166ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:14.287700    7820 retry.go:31] will retry after 2.801734122s: exit status 83
I1209 03:23:16.697624    7820 retry.go:31] will retry after 7.020731017s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.405375ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:17.182189    7820 retry.go:31] will retry after 4.837300678s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.794417ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p": exit status 83 (47.761208ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port377251147/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.03s)

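Note: the `retry.go:31] will retry after ...` lines above show the suite's backoff loop: each failed findmnt probe roughly doubles the wait until the mount either appears or the attempt budget runs out. A minimal sketch of that pattern (the real loop lives in minikube's retry helper; names and constants here are illustrative):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling probe, roughly doubling the pause between
// attempts, until it succeeds or the deadline passes -- the same shape as
// the "will retry after ..." log lines above.
func retryWithBackoff(probe func() error, deadline time.Duration) error {
	wait := 500 * time.Millisecond
	start := time.Now()
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) >= deadline {
			return fmt.Errorf("gave up after %v: %w", deadline, err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("exit status 83") // host still stopped
		}
		return nil
	}, 30*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}
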
TestFunctional/parallel/MountCmd/specific-port (12.12s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port31681321/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (68.4705ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:22.349882    7820 retry.go:31] will retry after 317.182519ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.490542ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:22.759910    7820 retry.go:31] will retry after 984.566419ms: exit status 83
I1209 03:23:23.720600    7820 retry.go:31] will retry after 14.241670849s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.545041ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:23.840295    7820 retry.go:31] will retry after 962.362323ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.852584ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:24.895841    7820 retry.go:31] will retry after 1.271408063s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.262167ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:26.260924    7820 retry.go:31] will retry after 1.94425849s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.9545ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:28.296448    7820 retry.go:31] will retry after 2.591012166s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.481042ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:30.979275    7820 retry.go:31] will retry after 3.162782062s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.338833ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "sudo umount -f /mount-9p": exit status 83 (51.619625ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-174000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port31681321/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.12s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.28s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1031718828/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1031718828/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1031718828/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (85.388917ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:34.492946    7820 retry.go:31] will retry after 666.300421ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (92.238125ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:35.253870    7820 retry.go:31] will retry after 773.414021ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (94.489708ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:36.124220    7820 retry.go:31] will retry after 1.280163868s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (91.404042ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:37.498079    7820 retry.go:31] will retry after 1.375694059s: exit status 83
I1209 03:23:37.964257    7820 retry.go:31] will retry after 20.987934157s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (92.730083ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:38.968886    7820 retry.go:31] will retry after 2.009885263s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (90.999125ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
I1209 03:23:41.072098    7820 retry.go:31] will retry after 3.123506741s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-174000 ssh "findmnt -T" /mount1: exit status 83 (90.945209ms)
-- stdout --
	* The control-plane node functional-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-174000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1031718828/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1031718828/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-174000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1031718828/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.28s)

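Note: VerifyCleanup launches three mount daemons against /mount1, /mount2, and /mount3 and then tears them all down; the "stopping [...]" lines above are that teardown. A sketch of the launch-and-teardown shape using context cancellation (binary path, profile, and mount targets come from the log above; using CommandContext is an assumption about how one might reproduce this, not the suite's exact mechanism):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	dir := "/tmp/mount-src" // hypothetical host directory to export
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-arm64",
			"mount", "-p", "functional-174000",
			dir+":"+target, "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil {
			fmt.Println("failed to start mount daemon:", err)
			return
		}
	}

	// ... probe the mounts with `minikube ssh "findmnt -T /mountN"` here ...
	time.Sleep(2 * time.Second)

	// Cancelling the context kills all three daemons, mirroring the
	// "stopping [out/minikube-darwin-arm64 mount ...]" teardown lines.
	cancel()
}
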
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-078000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-078000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

x
+
TestNetworkPlugins/group/cilium (2.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-557000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-557000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-557000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-557000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-557000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-557000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-557000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: kubelet daemon config:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> k8s: kubelet logs:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-557000

>>> host: docker daemon status:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: docker daemon config:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: docker system info:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: cri-docker daemon status:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: cri-docker daemon config:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: cri-dockerd version:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: containerd daemon status:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: containerd daemon config:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: containerd config dump:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: crio daemon status:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: crio daemon config:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: /etc/crio:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

>>> host: crio config:
* Profile "cilium-557000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557000"

----------------------- debugLogs end: cilium-557000 [took: 2.485930208s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-557000
--- SKIP: TestNetworkPlugins/group/cilium (2.60s)