Test Report: QEMU_macOS 20083

6c4fcf300662436f71bcf8696a35dd22d9fca43a:2024-12-11:37445

Tests failed (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.37
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.1
27 TestAddons/Setup 10.01
28 TestCertOptions 10.18
29 TestCertExpiration 198.78
30 TestDockerFlags 10.04
31 TestForceSystemdFlag 10.11
32 TestForceSystemdEnv 10.28
38 TestErrorSpam/setup 9.78
47 TestFunctional/serial/StartWithProxy 10.17
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.75
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.21
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.19
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.17
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.32
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 86.49
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.05
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
104 TestFunctional/parallel/ServiceCmd/Format 0.05
105 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/Version/components 0.05
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.33
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
127 TestFunctional/parallel/DockerEnv/bash 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.12
142 TestMultiControlPlane/serial/DeployApp 80.82
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 56.49
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.04
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.12
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.19
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.99
165 TestJSONOutput/start/Command 9.78
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.22
197 TestMountStart/serial/StartWithMountFirst 10.27
200 TestMultiNode/serial/FreshStart2Nodes 9.99
201 TestMultiNode/serial/DeployApp2Nodes 88.34
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 47.88
209 TestMultiNode/serial/RestartKeepsNodes 9.16
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 4.03
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.05
217 TestPreload 10.18
219 TestScheduledStopUnix 9.94
220 TestSkaffold 12.54
223 TestRunningBinaryUpgrade 622.02
225 TestKubernetesUpgrade 19.14
239 TestStoppedBinaryUpgrade/Upgrade 583.73
249 TestPause/serial/Start 9.99
252 TestNoKubernetes/serial/StartWithK8s 10.02
253 TestNoKubernetes/serial/StartWithStopK8s 5.3
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2
255 TestNoKubernetes/serial/Start 5.5
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.36
260 TestNoKubernetes/serial/StartNoArgs 7.63
262 TestNetworkPlugins/group/auto/Start 9.84
263 TestNetworkPlugins/group/kindnet/Start 9.97
264 TestNetworkPlugins/group/calico/Start 9.92
265 TestNetworkPlugins/group/custom-flannel/Start 9.92
266 TestNetworkPlugins/group/false/Start 9.91
267 TestNetworkPlugins/group/enable-default-cni/Start 9.99
268 TestNetworkPlugins/group/flannel/Start 9.86
269 TestNetworkPlugins/group/bridge/Start 9.96
270 TestNetworkPlugins/group/kubenet/Start 9.88
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.06
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.29
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.12
283 TestStartStop/group/no-preload/serial/FirstStart 10.1
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.27
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 9.98
295 TestStartStop/group/embed-certs/serial/DeployApp 0.09
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
299 TestStartStop/group/embed-certs/serial/SecondStart 5.26
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/embed-certs/serial/Pause 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.27
307 TestStartStop/group/newest-cni/serial/FirstStart 10.03
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.15
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
317 TestStartStop/group/newest-cni/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.11

TestDownloadOnly/v1.20.0/json-events (23.37s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-273000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-273000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (23.369923333s)

-- stdout --
	{"specversion":"1.0","id":"43b07d87-997b-42f4-b55f-b6a84fff1e0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-273000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"31baec8b-a782-47bf-abab-2ead243ea580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20083"}}
	{"specversion":"1.0","id":"2b102d0d-3a2d-4160-91f8-f3a94ebd945d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig"}}
	{"specversion":"1.0","id":"b1f98785-3aac-4e44-b1ad-54bf1001d0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8d7a50b3-827b-4244-ab0e-0d772de5d793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10dbb94c-174b-46be-a8da-a1688eea3b18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube"}}
	{"specversion":"1.0","id":"4ca13462-7d1b-4c96-91ed-a731f8e48a8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"bf228ed8-743c-41ef-8b6c-5166fe327f3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3883a71-aa78-4eb1-a598-92459847c87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0849ca73-3f2a-4997-9892-f7893b1ce6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c09c0273-a048-400d-a879-093a0422240c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-273000\" primary control-plane node in \"download-only-273000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"75567243-9936-4188-b7a8-955622b1ca02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"03e512c3-74a1-41a6-90e6-8f4639ce075d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109798380 0x109798380 0x109798380 0x109798380 0x109798380 0x109798380 0x109798380] Decompressors:map[bz2:0x140007204b0 gz:0x140007204b8 tar:0x14000720450 tar.bz2:0x14000720460 tar.gz:0x14000720480 tar.xz:0x14000720490 tar.zst:0x140007204a0 tbz2:0x14000720460 tgz:0x14000720480 txz:0x14000720490 tzst:0x140007204a0 xz:0x140007204c0 zip:0x140007204d0 zst:0x140007204c8] Getters:map[file:0x140005a08f0 http:0x140004ae5f0 https:0x140004ae640] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"e131563c-ad0c-4f9a-b11c-a59675dfafcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1211 15:21:30.303049    7136 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:21:30.303231    7136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:21:30.303234    7136 out.go:358] Setting ErrFile to fd 2...
	I1211 15:21:30.303237    7136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:21:30.303375    7136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	W1211 15:21:30.303480    7136 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20083-6627/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20083-6627/.minikube/config/config.json: no such file or directory
	I1211 15:21:30.304945    7136 out.go:352] Setting JSON to true
	I1211 15:21:30.322760    7136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4860,"bootTime":1733954430,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:21:30.322838    7136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:21:30.328810    7136 out.go:97] [download-only-273000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:21:30.328978    7136 notify.go:220] Checking for updates...
	W1211 15:21:30.329047    7136 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 15:21:30.331804    7136 out.go:169] MINIKUBE_LOCATION=20083
	I1211 15:21:30.333495    7136 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:21:30.338867    7136 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:21:30.342901    7136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:21:30.346846    7136 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	W1211 15:21:30.352860    7136 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 15:21:30.353118    7136 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:21:30.356778    7136 out.go:97] Using the qemu2 driver based on user configuration
	I1211 15:21:30.356799    7136 start.go:297] selected driver: qemu2
	I1211 15:21:30.356803    7136 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:21:30.356872    7136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:21:30.359818    7136 out.go:169] Automatically selected the socket_vmnet network
	I1211 15:21:30.366433    7136 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1211 15:21:30.366530    7136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 15:21:30.366569    7136 cni.go:84] Creating CNI manager for ""
	I1211 15:21:30.366603    7136 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1211 15:21:30.366652    7136 start.go:340] cluster config:
	{Name:download-only-273000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:21:30.371278    7136 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:21:30.374898    7136 out.go:97] Downloading VM boot image ...
	I1211 15:21:30.374926    7136 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso
	I1211 15:21:39.669827    7136 out.go:97] Starting "download-only-273000" primary control-plane node in "download-only-273000" cluster
	I1211 15:21:39.669866    7136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:21:39.724778    7136 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:21:39.724785    7136 cache.go:56] Caching tarball of preloaded images
	I1211 15:21:39.725043    7136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:21:39.732136    7136 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1211 15:21:39.732143    7136 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:39.815808    7136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:21:52.307798    7136 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:52.307966    7136 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:53.002496    7136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1211 15:21:53.002694    7136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/download-only-273000/config.json ...
	I1211 15:21:53.002710    7136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/download-only-273000/config.json: {Name:mk8d33b5e53b9e4b65834ca6cf10315c93caa2b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:21:53.002986    7136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:21:53.003225    7136 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1211 15:21:53.599898    7136 out.go:193] 
	W1211 15:21:53.604976    7136 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109798380 0x109798380 0x109798380 0x109798380 0x109798380 0x109798380 0x109798380] Decompressors:map[bz2:0x140007204b0 gz:0x140007204b8 tar:0x14000720450 tar.bz2:0x14000720460 tar.gz:0x14000720480 tar.xz:0x14000720490 tar.zst:0x140007204a0 tbz2:0x14000720460 tgz:0x14000720480 txz:0x14000720490 tzst:0x140007204a0 xz:0x140007204c0 zip:0x140007204d0 zst:0x140007204c8] Getters:map[file:0x140005a08f0 http:0x140004ae5f0 https:0x140004ae640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1211 15:21:53.605001    7136 out_reason.go:110] 
	W1211 15:21:53.610876    7136 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:21:53.613954    7136 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-273000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (23.37s)
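
For context: the failing step is kubectl caching, and the error above shows the .sha256 checksum fetch for the darwin/arm64 v1.20.0 kubectl binary returning HTTP 404 (Kubernetes v1.20.0 predates published darwin/arm64 release binaries, so neither the binary nor its checksum exists upstream). The 404 can be reproduced outside the test suite; what follows is a minimal illustrative Go sketch, not minikube code, that issues a HEAD request against the same checksum URL quoted in the error:

// checksum_probe.go - standalone illustrative sketch, not part of minikube.
// Confirms that the checksum file the downloader fetches first is missing
// upstream, which is why the kubectl cache step exits with status 40.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL taken verbatim from the INET_CACHE_KUBECTL error above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // expected: 404 Not Found
}

The same missing binary explains the next failure: kubectl was never cached, so the stat check in TestDownloadOnly/v1.20.0/kubectl cannot find the file.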

TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.1s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-356000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-356000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.943039166s)

-- stdout --
	* [offline-docker-356000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-356000" primary control-plane node in "offline-docker-356000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-356000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:32:30.563965    8896 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:32:30.564147    8896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:32:30.564150    8896 out.go:358] Setting ErrFile to fd 2...
	I1211 15:32:30.564152    8896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:32:30.564286    8896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:32:30.565576    8896 out.go:352] Setting JSON to false
	I1211 15:32:30.585158    8896 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5520,"bootTime":1733954430,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:32:30.585265    8896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:32:30.592447    8896 out.go:177] * [offline-docker-356000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:32:30.595507    8896 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:32:30.600443    8896 notify.go:220] Checking for updates...
	I1211 15:32:30.604433    8896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:32:30.607482    8896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:32:30.610401    8896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:32:30.613414    8896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:32:30.616438    8896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:32:30.619832    8896 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:32:30.619897    8896 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:32:30.623435    8896 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:32:30.630382    8896 start.go:297] selected driver: qemu2
	I1211 15:32:30.630393    8896 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:32:30.630402    8896 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:32:30.632716    8896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:32:30.636368    8896 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:32:30.639512    8896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:32:30.639530    8896 cni.go:84] Creating CNI manager for ""
	I1211 15:32:30.639554    8896 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:32:30.639561    8896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:32:30.639607    8896 start.go:340] cluster config:
	{Name:offline-docker-356000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:32:30.644449    8896 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:32:30.652418    8896 out.go:177] * Starting "offline-docker-356000" primary control-plane node in "offline-docker-356000" cluster
	I1211 15:32:30.655426    8896 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:32:30.655468    8896 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:32:30.655487    8896 cache.go:56] Caching tarball of preloaded images
	I1211 15:32:30.655591    8896 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:32:30.655598    8896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:32:30.655679    8896 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/offline-docker-356000/config.json ...
	I1211 15:32:30.655690    8896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/offline-docker-356000/config.json: {Name:mkcff5dbef718e23c73c86070c1c2243a1091e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:32:30.656075    8896 start.go:360] acquireMachinesLock for offline-docker-356000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:30.656126    8896 start.go:364] duration metric: took 38.875µs to acquireMachinesLock for "offline-docker-356000"
	I1211 15:32:30.656136    8896 start.go:93] Provisioning new machine with config: &{Name:offline-docker-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:32:30.656172    8896 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:32:30.663339    8896 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:32:30.679253    8896 start.go:159] libmachine.API.Create for "offline-docker-356000" (driver="qemu2")
	I1211 15:32:30.679282    8896 client.go:168] LocalClient.Create starting
	I1211 15:32:30.679370    8896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:32:30.679417    8896 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:30.679435    8896 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:30.679484    8896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:32:30.679513    8896 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:30.679528    8896 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:30.680050    8896 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:32:30.844083    8896 main.go:141] libmachine: Creating SSH key...
	I1211 15:32:31.024670    8896 main.go:141] libmachine: Creating Disk image...
	I1211 15:32:31.024679    8896 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:32:31.024893    8896 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2
	I1211 15:32:31.035490    8896 main.go:141] libmachine: STDOUT: 
	I1211 15:32:31.035514    8896 main.go:141] libmachine: STDERR: 
	I1211 15:32:31.035586    8896 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2 +20000M
	I1211 15:32:31.052577    8896 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:32:31.052607    8896 main.go:141] libmachine: STDERR: 
	I1211 15:32:31.052622    8896 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2
	I1211 15:32:31.052628    8896 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:32:31.052639    8896 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:31.052674    8896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:df:b5:cd:20:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2
	I1211 15:32:31.054560    8896 main.go:141] libmachine: STDOUT: 
	I1211 15:32:31.054576    8896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:31.054596    8896 client.go:171] duration metric: took 375.311125ms to LocalClient.Create
	I1211 15:32:33.055506    8896 start.go:128] duration metric: took 2.399392166s to createHost
	I1211 15:32:33.055531    8896 start.go:83] releasing machines lock for "offline-docker-356000", held for 2.399466625s
	W1211 15:32:33.055544    8896 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:33.060413    8896 out.go:177] * Deleting "offline-docker-356000" in qemu2 ...
	W1211 15:32:33.076099    8896 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:33.076109    8896 start.go:729] Will try again in 5 seconds ...
	I1211 15:32:38.078181    8896 start.go:360] acquireMachinesLock for offline-docker-356000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:38.078684    8896 start.go:364] duration metric: took 397.541µs to acquireMachinesLock for "offline-docker-356000"
	I1211 15:32:38.078823    8896 start.go:93] Provisioning new machine with config: &{Name:offline-docker-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:32:38.079104    8896 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:32:38.098141    8896 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:32:38.147288    8896 start.go:159] libmachine.API.Create for "offline-docker-356000" (driver="qemu2")
	I1211 15:32:38.147340    8896 client.go:168] LocalClient.Create starting
	I1211 15:32:38.147506    8896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:32:38.147588    8896 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:38.147602    8896 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:38.147675    8896 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:32:38.147736    8896 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:38.147746    8896 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:38.148401    8896 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:32:38.322326    8896 main.go:141] libmachine: Creating SSH key...
	I1211 15:32:38.401563    8896 main.go:141] libmachine: Creating Disk image...
	I1211 15:32:38.401569    8896 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:32:38.401812    8896 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2
	I1211 15:32:38.411676    8896 main.go:141] libmachine: STDOUT: 
	I1211 15:32:38.411704    8896 main.go:141] libmachine: STDERR: 
	I1211 15:32:38.411782    8896 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2 +20000M
	I1211 15:32:38.420145    8896 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:32:38.420160    8896 main.go:141] libmachine: STDERR: 
	I1211 15:32:38.420174    8896 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2
	I1211 15:32:38.420183    8896 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:32:38.420190    8896 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:38.420225    8896 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ec:f7:fe:aa:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/offline-docker-356000/disk.qcow2
	I1211 15:32:38.422057    8896 main.go:141] libmachine: STDOUT: 
	I1211 15:32:38.422087    8896 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:38.422099    8896 client.go:171] duration metric: took 274.761ms to LocalClient.Create
	I1211 15:32:40.424212    8896 start.go:128] duration metric: took 2.345136375s to createHost
	I1211 15:32:40.424264    8896 start.go:83] releasing machines lock for "offline-docker-356000", held for 2.345626708s
	W1211 15:32:40.424608    8896 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:40.439283    8896 out.go:201] 
	W1211 15:32:40.444371    8896 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:32:40.444426    8896 out.go:270] * 
	* 
	W1211 15:32:40.447372    8896 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:32:40.458232    8896 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-356000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-11 15:32:40.474929 -0800 PST m=+670.333497710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-356000 -n offline-docker-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-356000 -n offline-docker-356000: exit status 7 (70.55875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-356000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-356000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-356000
--- FAIL: TestOffline (10.10s)
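
TestOffline, and nearly every other failure in this run, shares one signature: Failed to connect to "/var/run/socket_vmnet": Connection refused. That means the socket_vmnet daemon was not listening on the agent when libmachine tried to start the QEMU VM. A minimal illustrative Go probe (a sketch, not part of the suite; the socket path is assumed from the logs above) distinguishes "daemon down" from other QEMU start problems:

// vmnet_probe.go - standalone illustrative sketch, not part of minikube.
// Dials the unix socket that socket_vmnet_client connects to; getting
// "connection refused" here reproduces the libmachine error above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failure logs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}

If this probe fails the same way, restarting the socket_vmnet service on the agent is the first thing to try; most of the remaining GUEST_PROVISION failures in this report trace back to this shared cause.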

TestAddons/Setup (10.01s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-645000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-645000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.012686541s)

-- stdout --
	* [addons-645000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-645000" primary control-plane node in "addons-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:22:06.396380    7217 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:22:06.396553    7217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:22:06.396556    7217 out.go:358] Setting ErrFile to fd 2...
	I1211 15:22:06.396559    7217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:22:06.396674    7217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:22:06.397823    7217 out.go:352] Setting JSON to false
	I1211 15:22:06.415452    7217 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4896,"bootTime":1733954430,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:22:06.415516    7217 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:22:06.419979    7217 out.go:177] * [addons-645000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:22:06.426926    7217 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:22:06.426977    7217 notify.go:220] Checking for updates...
	I1211 15:22:06.433847    7217 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:22:06.436953    7217 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:22:06.439972    7217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:22:06.441251    7217 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:22:06.443969    7217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:22:06.447218    7217 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:22:06.450825    7217 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:22:06.457948    7217 start.go:297] selected driver: qemu2
	I1211 15:22:06.457954    7217 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:22:06.457960    7217 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:22:06.460573    7217 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:22:06.463019    7217 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:22:06.467003    7217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:22:06.467033    7217 cni.go:84] Creating CNI manager for ""
	I1211 15:22:06.467057    7217 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:22:06.467063    7217 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:22:06.467098    7217 start.go:340] cluster config:
	{Name:addons-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:22:06.471772    7217 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:22:06.479911    7217 out.go:177] * Starting "addons-645000" primary control-plane node in "addons-645000" cluster
	I1211 15:22:06.483900    7217 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:22:06.483917    7217 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:22:06.483923    7217 cache.go:56] Caching tarball of preloaded images
	I1211 15:22:06.484005    7217 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:22:06.484011    7217 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:22:06.484239    7217 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/addons-645000/config.json ...
	I1211 15:22:06.484251    7217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/addons-645000/config.json: {Name:mk1f57e934823e713a61841e1f5c3a330b8b5454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:22:06.484645    7217 start.go:360] acquireMachinesLock for addons-645000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:22:06.484741    7217 start.go:364] duration metric: took 89.333µs to acquireMachinesLock for "addons-645000"
	I1211 15:22:06.484752    7217 start.go:93] Provisioning new machine with config: &{Name:addons-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:22:06.484789    7217 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:22:06.492995    7217 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1211 15:22:06.510636    7217 start.go:159] libmachine.API.Create for "addons-645000" (driver="qemu2")
	I1211 15:22:06.510677    7217 client.go:168] LocalClient.Create starting
	I1211 15:22:06.510843    7217 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:22:06.601060    7217 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:22:06.651134    7217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:22:06.864107    7217 main.go:141] libmachine: Creating SSH key...
	I1211 15:22:06.920115    7217 main.go:141] libmachine: Creating Disk image...
	I1211 15:22:06.920120    7217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:22:06.920343    7217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2
	I1211 15:22:06.930218    7217 main.go:141] libmachine: STDOUT: 
	I1211 15:22:06.930242    7217 main.go:141] libmachine: STDERR: 
	I1211 15:22:06.930306    7217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2 +20000M
	I1211 15:22:06.938860    7217 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:22:06.938882    7217 main.go:141] libmachine: STDERR: 
	I1211 15:22:06.938904    7217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2
	I1211 15:22:06.938911    7217 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:22:06.938950    7217 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:22:06.938976    7217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:14:0b:14:18:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2
	I1211 15:22:06.940766    7217 main.go:141] libmachine: STDOUT: 
	I1211 15:22:06.940786    7217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:22:06.940815    7217 client.go:171] duration metric: took 430.12825ms to LocalClient.Create
	I1211 15:22:08.942963    7217 start.go:128] duration metric: took 2.458187708s to createHost
	I1211 15:22:08.943035    7217 start.go:83] releasing machines lock for "addons-645000", held for 2.458315709s
	W1211 15:22:08.943146    7217 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:22:08.959472    7217 out.go:177] * Deleting "addons-645000" in qemu2 ...
	W1211 15:22:08.987649    7217 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:22:08.987671    7217 start.go:729] Will try again in 5 seconds ...
	I1211 15:22:13.989820    7217 start.go:360] acquireMachinesLock for addons-645000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:22:13.990315    7217 start.go:364] duration metric: took 404.417µs to acquireMachinesLock for "addons-645000"
	I1211 15:22:13.990442    7217 start.go:93] Provisioning new machine with config: &{Name:addons-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:22:13.990707    7217 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:22:14.012536    7217 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1211 15:22:14.061464    7217 start.go:159] libmachine.API.Create for "addons-645000" (driver="qemu2")
	I1211 15:22:14.061516    7217 client.go:168] LocalClient.Create starting
	I1211 15:22:14.061655    7217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:22:14.061728    7217 main.go:141] libmachine: Decoding PEM data...
	I1211 15:22:14.061746    7217 main.go:141] libmachine: Parsing certificate...
	I1211 15:22:14.061835    7217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:22:14.061894    7217 main.go:141] libmachine: Decoding PEM data...
	I1211 15:22:14.061936    7217 main.go:141] libmachine: Parsing certificate...
	I1211 15:22:14.062550    7217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:22:14.234401    7217 main.go:141] libmachine: Creating SSH key...
	I1211 15:22:14.306352    7217 main.go:141] libmachine: Creating Disk image...
	I1211 15:22:14.306358    7217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:22:14.306585    7217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2
	I1211 15:22:14.316536    7217 main.go:141] libmachine: STDOUT: 
	I1211 15:22:14.316557    7217 main.go:141] libmachine: STDERR: 
	I1211 15:22:14.316613    7217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2 +20000M
	I1211 15:22:14.325084    7217 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:22:14.325101    7217 main.go:141] libmachine: STDERR: 
	I1211 15:22:14.325115    7217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2
	I1211 15:22:14.325121    7217 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:22:14.325138    7217 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:22:14.325169    7217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f9:60:56:fe:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/addons-645000/disk.qcow2
	I1211 15:22:14.326954    7217 main.go:141] libmachine: STDOUT: 
	I1211 15:22:14.326969    7217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:22:14.326982    7217 client.go:171] duration metric: took 265.462833ms to LocalClient.Create
	I1211 15:22:16.329209    7217 start.go:128] duration metric: took 2.338499792s to createHost
	I1211 15:22:16.329271    7217 start.go:83] releasing machines lock for "addons-645000", held for 2.338963458s
	W1211 15:22:16.329642    7217 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:22:16.344330    7217 out.go:201] 
	W1211 15:22:16.349541    7217 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:22:16.349573    7217 out.go:270] * 
	* 
	W1211 15:22:16.352389    7217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:22:16.362365    7217 out.go:201] 
** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-645000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.01s)
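Every failure below this point traces to the same root cause shown in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. A minimal Go sketch of that reachability check, assuming only the socket path from the logs (the program is a hypothetical diagnostic, not part of minikube or its test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the failing socket_vmnet_client invocation above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon listening, this reports the same "connection refused"
		// seen in every trace in this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refused dial here implicates the host's socket_vmnet service (not running, or a stale socket file) rather than minikube itself, which matches the GUEST_PROVISION exit repeated across the tests that follow.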
TestCertOptions (10.18s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-297000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-297000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.823800417s)
-- stdout --
	* [cert-options-297000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-297000" primary control-plane node in "cert-options-297000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-297000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-297000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-297000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (90.755542ms)
-- stdout --
	* The control-plane node cert-options-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-297000"
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-297000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-297000 config view
cert_options_test.go:93: Kubeconfig API server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-297000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-297000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (45.226334ms)
-- stdout --
	* The control-plane node cert-options-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-297000"
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-297000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-297000"
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-11 15:44:08.821214 -0800 PST m=+1358.701019376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-297000 -n cert-options-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-297000 -n cert-options-297000: exit status 7 (33.535583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-297000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-297000
--- FAIL: TestCertOptions (10.18s)
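The SAN assertions at cert_options_test.go:69 fail only because no VM ever started, so there was no apiserver.crt to read. For orientation, this is the shape of the check the test performs via openssl, redone with Go's crypto/x509; the expected names and IPs come from the --apiserver-names/--apiserver-ips flags above, while the file path and the program itself are illustrative, not the test's actual code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// DNS names requested via --apiserver-names, IPs via --apiserver-ips.
	for _, name := range []string{"localhost", "www.google.com"} {
		fmt.Printf("SAN DNS %q present: %v\n", name, containsString(cert.DNSNames, name))
	}
	for _, ip := range []string{"127.0.0.1", "192.168.15.15"} {
		fmt.Printf("SAN IP %s present: %v\n", ip, containsIP(cert.IPAddresses, net.ParseIP(ip)))
	}
}

func containsString(list []string, want string) bool {
	for _, s := range list {
		if s == want {
			return true
		}
	}
	return false
}

func containsIP(list []net.IP, want net.IP) bool {
	for _, ip := range list {
		if ip.Equal(want) {
			return true
		}
	}
	return false
}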
TestCertExpiration (198.78s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-435000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-435000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.323288s)
-- stdout --
	* [cert-expiration-435000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-435000" primary control-plane node in "cert-expiration-435000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-435000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-435000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-435000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-435000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-435000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.266319625s)
-- stdout --
	* [cert-expiration-435000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-435000" primary control-plane node in "cert-expiration-435000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-435000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-435000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-435000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-435000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-435000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-435000" primary control-plane node in "cert-expiration-435000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-435000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-435000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-435000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-11 15:46:59.877666 -0800 PST m=+1529.762749376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-435000 -n cert-expiration-435000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-435000 -n cert-expiration-435000: exit status 7 (73.823958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-435000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-435000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-435000
--- FAIL: TestCertExpiration (198.78s)
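TestCertExpiration starts a cluster with --cert-expiration=3m, lets the three minutes elapse (hence the ~198s total despite both starts failing within seconds), then restarts with --cert-expiration=8760h expecting a warning about expired certs; in this run neither start got far enough to issue a certificate. A sketch of the expiry condition itself, assuming a client certificate on disk (the file name is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical profile certificate, e.g. under .minikube/profiles/<name>/.
	pemBytes, err := os.ReadFile("client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// With --cert-expiration=3m, NotAfter lands three minutes after creation;
	// a restart past that point should detect the expiry and warn.
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired %s ago; a restart should warn and regenerate\n",
			time.Since(cert.NotAfter).Round(time.Second))
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}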
TestDockerFlags (10.04s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-877000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-877000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.782616541s)
-- stdout --
	* [docker-flags-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-877000" primary control-plane node in "docker-flags-877000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-877000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1211 15:43:48.755603    9536 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:43:48.755764    9536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:43:48.755767    9536 out.go:358] Setting ErrFile to fd 2...
	I1211 15:43:48.755770    9536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:43:48.755914    9536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:43:48.757123    9536 out.go:352] Setting JSON to false
	I1211 15:43:48.774777    9536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6198,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:43:48.774867    9536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:43:48.782108    9536 out.go:177] * [docker-flags-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:43:48.789896    9536 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:43:48.789945    9536 notify.go:220] Checking for updates...
	I1211 15:43:48.798872    9536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:43:48.801852    9536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:43:48.805857    9536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:43:48.808961    9536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:43:48.811868    9536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:43:48.815257    9536 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:43:48.815346    9536 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:43:48.815402    9536 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:43:48.819884    9536 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:43:48.826872    9536 start.go:297] selected driver: qemu2
	I1211 15:43:48.826885    9536 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:43:48.826894    9536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:43:48.829672    9536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:43:48.833892    9536 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:43:48.836910    9536 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1211 15:43:48.836935    9536 cni.go:84] Creating CNI manager for ""
	I1211 15:43:48.836957    9536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:43:48.836961    9536 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:43:48.836995    9536 start.go:340] cluster config:
	{Name:docker-flags-877000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:43:48.842080    9536 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:43:48.849918    9536 out.go:177] * Starting "docker-flags-877000" primary control-plane node in "docker-flags-877000" cluster
	I1211 15:43:48.853864    9536 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:43:48.853881    9536 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:43:48.853889    9536 cache.go:56] Caching tarball of preloaded images
	I1211 15:43:48.853975    9536 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:43:48.853981    9536 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:43:48.854054    9536 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/docker-flags-877000/config.json ...
	I1211 15:43:48.854066    9536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/docker-flags-877000/config.json: {Name:mkdd460c672b58b4de44b3c52caa06e604048fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:43:48.854470    9536 start.go:360] acquireMachinesLock for docker-flags-877000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:43:48.854523    9536 start.go:364] duration metric: took 45.042µs to acquireMachinesLock for "docker-flags-877000"
	I1211 15:43:48.854534    9536 start.go:93] Provisioning new machine with config: &{Name:docker-flags-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:43:48.854570    9536 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:43:48.856630    9536 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:43:48.874568    9536 start.go:159] libmachine.API.Create for "docker-flags-877000" (driver="qemu2")
	I1211 15:43:48.874593    9536 client.go:168] LocalClient.Create starting
	I1211 15:43:48.874674    9536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:43:48.874722    9536 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:48.874734    9536 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:48.874771    9536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:43:48.874801    9536 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:48.874808    9536 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:48.875252    9536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:43:49.035368    9536 main.go:141] libmachine: Creating SSH key...
	I1211 15:43:49.074719    9536 main.go:141] libmachine: Creating Disk image...
	I1211 15:43:49.074725    9536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:43:49.074930    9536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2
	I1211 15:43:49.084602    9536 main.go:141] libmachine: STDOUT: 
	I1211 15:43:49.084620    9536 main.go:141] libmachine: STDERR: 
	I1211 15:43:49.084689    9536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2 +20000M
	I1211 15:43:49.092984    9536 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:43:49.093000    9536 main.go:141] libmachine: STDERR: 
	I1211 15:43:49.093018    9536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2
	I1211 15:43:49.093024    9536 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:43:49.093039    9536 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:43:49.093063    9536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:21:76:0b:46:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2
	I1211 15:43:49.094844    9536 main.go:141] libmachine: STDOUT: 
	I1211 15:43:49.094872    9536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:43:49.094890    9536 client.go:171] duration metric: took 220.296583ms to LocalClient.Create
	I1211 15:43:51.096997    9536 start.go:128] duration metric: took 2.242474042s to createHost
	I1211 15:43:51.097055    9536 start.go:83] releasing machines lock for "docker-flags-877000", held for 2.242591125s
	W1211 15:43:51.097119    9536 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:51.122324    9536 out.go:177] * Deleting "docker-flags-877000" in qemu2 ...
	W1211 15:43:51.145996    9536 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:51.146015    9536 start.go:729] Will try again in 5 seconds ...
	I1211 15:43:56.148188    9536 start.go:360] acquireMachinesLock for docker-flags-877000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:43:56.148796    9536 start.go:364] duration metric: took 506.458µs to acquireMachinesLock for "docker-flags-877000"
	I1211 15:43:56.148904    9536 start.go:93] Provisioning new machine with config: &{Name:docker-flags-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:43:56.149205    9536 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:43:56.153829    9536 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:43:56.203717    9536 start.go:159] libmachine.API.Create for "docker-flags-877000" (driver="qemu2")
	I1211 15:43:56.203780    9536 client.go:168] LocalClient.Create starting
	I1211 15:43:56.203894    9536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:43:56.203951    9536 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:56.203969    9536 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:56.204044    9536 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:43:56.204076    9536 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:56.204090    9536 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:56.204770    9536 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:43:56.376005    9536 main.go:141] libmachine: Creating SSH key...
	I1211 15:43:56.434116    9536 main.go:141] libmachine: Creating Disk image...
	I1211 15:43:56.434121    9536 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:43:56.434341    9536 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2
	I1211 15:43:56.444462    9536 main.go:141] libmachine: STDOUT: 
	I1211 15:43:56.444486    9536 main.go:141] libmachine: STDERR: 
	I1211 15:43:56.444549    9536 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2 +20000M
	I1211 15:43:56.452955    9536 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:43:56.452977    9536 main.go:141] libmachine: STDERR: 
	I1211 15:43:56.452988    9536 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2
	I1211 15:43:56.452992    9536 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:43:56.453004    9536 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:43:56.453037    9536 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:c9:f6:6f:fa:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/docker-flags-877000/disk.qcow2
	I1211 15:43:56.454844    9536 main.go:141] libmachine: STDOUT: 
	I1211 15:43:56.454865    9536 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:43:56.454878    9536 client.go:171] duration metric: took 251.098833ms to LocalClient.Create
	I1211 15:43:58.456986    9536 start.go:128] duration metric: took 2.307810833s to createHost
	I1211 15:43:58.457038    9536 start.go:83] releasing machines lock for "docker-flags-877000", held for 2.308280375s
	W1211 15:43:58.457432    9536 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:58.474159    9536 out.go:201] 
	W1211 15:43:58.478139    9536 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:43:58.478224    9536 out.go:270] * 
	* 
	W1211 15:43:58.480613    9536 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:43:58.493136    9536 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-877000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-877000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-877000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (84.998291ms)

-- stdout --
	* The control-plane node docker-flags-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-877000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-877000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-877000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-877000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-877000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-877000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-877000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-877000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.952167ms)

-- stdout --
	* The control-plane node docker-flags-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-877000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-877000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-877000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-877000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-877000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-11 15:43:58.641371 -0800 PST m=+1348.520861918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-877000 -n docker-flags-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-877000 -n docker-flags-877000: exit status 7 (33.697917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-877000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-877000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-877000
--- FAIL: TestDockerFlags (10.04s)
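Diagnosis: TestDockerFlags (and the TestForceSystemd* tests below) never gets a VM. /opt/socket_vmnet/bin/socket_vmnet_client cannot dial the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never launched and every subsequent ssh/status assertion sees state=Stopped. A minimal triage sketch for the build host follows; it uses only standard macOS tools plus the socket path taken from the log above:

	ls -l /var/run/socket_vmnet                  # does the socket file exist?
	pgrep -fl socket_vmnet                       # is a socket_vmnet daemon process running?
	sudo launchctl list | grep -i socket_vmnet   # is a launchd-managed daemon loaded?

On a Unix socket, "Connection refused" (as opposed to "No such file or directory") usually means the socket file exists but no process is listening on it, i.e. the socket_vmnet daemon exited or was not restarted after the agent rebooted.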

TestForceSystemdFlag (10.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.910793791s)

-- stdout --
	* [force-systemd-flag-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-616000" primary control-plane node in "force-systemd-flag-616000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:43:14.215276    9380 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:43:14.215419    9380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:43:14.215421    9380 out.go:358] Setting ErrFile to fd 2...
	I1211 15:43:14.215424    9380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:43:14.215566    9380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:43:14.216679    9380 out.go:352] Setting JSON to false
	I1211 15:43:14.234254    9380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6164,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:43:14.234328    9380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:43:14.238053    9380 out.go:177] * [force-systemd-flag-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:43:14.245068    9380 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:43:14.245082    9380 notify.go:220] Checking for updates...
	I1211 15:43:14.251027    9380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:43:14.254049    9380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:43:14.257030    9380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:43:14.260008    9380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:43:14.263089    9380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:43:14.266442    9380 config.go:182] Loaded profile config "NoKubernetes-237000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:43:14.266526    9380 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:43:14.266580    9380 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:43:14.271035    9380 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:43:14.276978    9380 start.go:297] selected driver: qemu2
	I1211 15:43:14.276985    9380 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:43:14.276993    9380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:43:14.279669    9380 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:43:14.284007    9380 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:43:14.287140    9380 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 15:43:14.287157    9380 cni.go:84] Creating CNI manager for ""
	I1211 15:43:14.287196    9380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:43:14.287201    9380 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:43:14.287236    9380 start.go:340] cluster config:
	{Name:force-systemd-flag-616000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:43:14.292121    9380 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:43:14.300095    9380 out.go:177] * Starting "force-systemd-flag-616000" primary control-plane node in "force-systemd-flag-616000" cluster
	I1211 15:43:14.303918    9380 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:43:14.303937    9380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:43:14.303946    9380 cache.go:56] Caching tarball of preloaded images
	I1211 15:43:14.304033    9380 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:43:14.304046    9380 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:43:14.304135    9380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/force-systemd-flag-616000/config.json ...
	I1211 15:43:14.304147    9380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/force-systemd-flag-616000/config.json: {Name:mk6fb55b2dfb7322370638af70595d93d0e5c879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:43:14.304692    9380 start.go:360] acquireMachinesLock for force-systemd-flag-616000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:43:14.304761    9380 start.go:364] duration metric: took 61.708µs to acquireMachinesLock for "force-systemd-flag-616000"
	I1211 15:43:14.304772    9380 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:43:14.304819    9380 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:43:14.312030    9380 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:43:14.329804    9380 start.go:159] libmachine.API.Create for "force-systemd-flag-616000" (driver="qemu2")
	I1211 15:43:14.329839    9380 client.go:168] LocalClient.Create starting
	I1211 15:43:14.329915    9380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:43:14.329953    9380 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:14.329966    9380 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:14.330015    9380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:43:14.330045    9380 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:14.330053    9380 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:14.330525    9380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:43:14.505393    9380 main.go:141] libmachine: Creating SSH key...
	I1211 15:43:14.572517    9380 main.go:141] libmachine: Creating Disk image...
	I1211 15:43:14.572528    9380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:43:14.572742    9380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1211 15:43:14.582448    9380 main.go:141] libmachine: STDOUT: 
	I1211 15:43:14.582467    9380 main.go:141] libmachine: STDERR: 
	I1211 15:43:14.582540    9380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2 +20000M
	I1211 15:43:14.591131    9380 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:43:14.591150    9380 main.go:141] libmachine: STDERR: 
	I1211 15:43:14.591162    9380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1211 15:43:14.591168    9380 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:43:14.591179    9380 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:43:14.591214    9380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:93:fa:e5:92:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1211 15:43:14.593037    9380 main.go:141] libmachine: STDOUT: 
	I1211 15:43:14.593052    9380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:43:14.593074    9380 client.go:171] duration metric: took 263.235667ms to LocalClient.Create
	I1211 15:43:16.595176    9380 start.go:128] duration metric: took 2.290406167s to createHost
	I1211 15:43:16.595248    9380 start.go:83] releasing machines lock for "force-systemd-flag-616000", held for 2.290546917s
	W1211 15:43:16.595335    9380 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:16.610508    9380 out.go:177] * Deleting "force-systemd-flag-616000" in qemu2 ...
	W1211 15:43:16.643096    9380 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:16.643132    9380 start.go:729] Will try again in 5 seconds ...
	I1211 15:43:21.645138    9380 start.go:360] acquireMachinesLock for force-systemd-flag-616000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:43:21.645538    9380 start.go:364] duration metric: took 323.166µs to acquireMachinesLock for "force-systemd-flag-616000"
	I1211 15:43:21.645661    9380 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:43:21.645921    9380 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:43:21.651544    9380 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:43:21.699473    9380 start.go:159] libmachine.API.Create for "force-systemd-flag-616000" (driver="qemu2")
	I1211 15:43:21.699527    9380 client.go:168] LocalClient.Create starting
	I1211 15:43:21.699629    9380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:43:21.699689    9380 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:21.699706    9380 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:21.699782    9380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:43:21.699822    9380 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:21.699832    9380 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:21.700508    9380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:43:21.891515    9380 main.go:141] libmachine: Creating SSH key...
	I1211 15:43:22.020448    9380 main.go:141] libmachine: Creating Disk image...
	I1211 15:43:22.020454    9380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:43:22.020652    9380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1211 15:43:22.030893    9380 main.go:141] libmachine: STDOUT: 
	I1211 15:43:22.030917    9380 main.go:141] libmachine: STDERR: 
	I1211 15:43:22.030981    9380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2 +20000M
	I1211 15:43:22.039622    9380 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:43:22.039637    9380 main.go:141] libmachine: STDERR: 
	I1211 15:43:22.039649    9380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1211 15:43:22.039654    9380 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:43:22.039663    9380 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:43:22.039702    9380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d1:52:3b:cd:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-flag-616000/disk.qcow2
	I1211 15:43:22.041469    9380 main.go:141] libmachine: STDOUT: 
	I1211 15:43:22.041483    9380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:43:22.041496    9380 client.go:171] duration metric: took 341.973375ms to LocalClient.Create
	I1211 15:43:24.043607    9380 start.go:128] duration metric: took 2.397724583s to createHost
	I1211 15:43:24.043680    9380 start.go:83] releasing machines lock for "force-systemd-flag-616000", held for 2.398192292s
	W1211 15:43:24.044099    9380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:24.057674    9380 out.go:201] 
	W1211 15:43:24.061854    9380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:43:24.061890    9380 out.go:270] * 
	* 
	W1211 15:43:24.064494    9380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:43:24.077628    9380 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-616000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-616000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.690625ms)

-- stdout --
	* The control-plane node force-systemd-flag-616000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-616000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-616000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-11 15:43:24.17548 -0800 PST m=+1314.053907293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-616000 -n force-systemd-flag-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-616000 -n force-systemd-flag-616000: exit status 7 (36.062125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-616000
--- FAIL: TestForceSystemdFlag (10.11s)
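The identical socket_vmnet failure repeats here, which points at the shared build-host daemon rather than at this test. A restart sketch follows, assuming the manual /opt/socket_vmnet install implied by the client path in the log; the daemon binary path and the --vmnet-gateway address follow the socket_vmnet project defaults and are assumptions, not values from this report (a Homebrew-managed install would instead use "sudo brew services start socket_vmnet" and a prefix-relative socket path):

	# vmnet.framework requires root; start the daemon, then retry the failed profile
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	out/minikube-darwin-arm64 delete -p force-systemd-flag-616000
	out/minikube-darwin-arm64 start -p force-systemd-flag-616000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2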

TestForceSystemdEnv (10.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-295000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-295000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.076802291s)

-- stdout --
	* [force-systemd-env-295000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-295000" primary control-plane node in "force-systemd-env-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:43:38.475488    9497 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:43:38.475636    9497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:43:38.475640    9497 out.go:358] Setting ErrFile to fd 2...
	I1211 15:43:38.475642    9497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:43:38.475778    9497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:43:38.476948    9497 out.go:352] Setting JSON to false
	I1211 15:43:38.494395    9497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6188,"bootTime":1733954430,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:43:38.494480    9497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:43:38.501546    9497 out.go:177] * [force-systemd-env-295000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:43:38.509465    9497 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:43:38.509518    9497 notify.go:220] Checking for updates...
	I1211 15:43:38.519486    9497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:43:38.523478    9497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:43:38.526482    9497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:43:38.529483    9497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:43:38.532415    9497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1211 15:43:38.535789    9497 config.go:182] Loaded profile config "NoKubernetes-237000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1211 15:43:38.535872    9497 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:43:38.535914    9497 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:43:38.540495    9497 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:43:38.547480    9497 start.go:297] selected driver: qemu2
	I1211 15:43:38.547485    9497 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:43:38.547490    9497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:43:38.550078    9497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:43:38.552500    9497 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:43:38.556494    9497 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 15:43:38.556507    9497 cni.go:84] Creating CNI manager for ""
	I1211 15:43:38.556528    9497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:43:38.556532    9497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:43:38.556561    9497 start.go:340] cluster config:
	{Name:force-systemd-env-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:43:38.561085    9497 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:43:38.565461    9497 out.go:177] * Starting "force-systemd-env-295000" primary control-plane node in "force-systemd-env-295000" cluster
	I1211 15:43:38.572486    9497 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:43:38.572509    9497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:43:38.572519    9497 cache.go:56] Caching tarball of preloaded images
	I1211 15:43:38.572611    9497 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:43:38.572617    9497 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:43:38.572721    9497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/force-systemd-env-295000/config.json ...
	I1211 15:43:38.572733    9497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/force-systemd-env-295000/config.json: {Name:mkc02d92b0130d0fd6c0c151c63b2b6bfbddadc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:43:38.573024    9497 start.go:360] acquireMachinesLock for force-systemd-env-295000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:43:38.573081    9497 start.go:364] duration metric: took 47.333µs to acquireMachinesLock for "force-systemd-env-295000"
	I1211 15:43:38.573093    9497 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:43:38.573129    9497 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:43:38.577462    9497 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:43:38.595287    9497 start.go:159] libmachine.API.Create for "force-systemd-env-295000" (driver="qemu2")
	I1211 15:43:38.595317    9497 client.go:168] LocalClient.Create starting
	I1211 15:43:38.595397    9497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:43:38.595438    9497 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:38.595452    9497 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:38.595494    9497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:43:38.595526    9497 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:38.595533    9497 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:38.595982    9497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:43:38.765275    9497 main.go:141] libmachine: Creating SSH key...
	I1211 15:43:38.978050    9497 main.go:141] libmachine: Creating Disk image...
	I1211 15:43:38.978062    9497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:43:38.978352    9497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2
	I1211 15:43:38.988842    9497 main.go:141] libmachine: STDOUT: 
	I1211 15:43:38.988859    9497 main.go:141] libmachine: STDERR: 
	I1211 15:43:38.988934    9497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2 +20000M
	I1211 15:43:38.997618    9497 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:43:38.997635    9497 main.go:141] libmachine: STDERR: 
	I1211 15:43:38.997649    9497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2
	I1211 15:43:38.997653    9497 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:43:38.997664    9497 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:43:38.997694    9497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:64:d0:9d:91:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2
	I1211 15:43:38.999522    9497 main.go:141] libmachine: STDOUT: 
	I1211 15:43:38.999545    9497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:43:38.999568    9497 client.go:171] duration metric: took 404.253958ms to LocalClient.Create
	I1211 15:43:41.001677    9497 start.go:128] duration metric: took 2.428603292s to createHost
	I1211 15:43:41.001749    9497 start.go:83] releasing machines lock for "force-systemd-env-295000", held for 2.428732584s
	W1211 15:43:41.001799    9497 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:41.022425    9497 out.go:177] * Deleting "force-systemd-env-295000" in qemu2 ...
	W1211 15:43:41.084440    9497 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:41.084476    9497 start.go:729] Will try again in 5 seconds ...
	I1211 15:43:46.086530    9497 start.go:360] acquireMachinesLock for force-systemd-env-295000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:43:46.087147    9497 start.go:364] duration metric: took 528.75µs to acquireMachinesLock for "force-systemd-env-295000"
	I1211 15:43:46.087299    9497 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:43:46.087540    9497 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:43:46.093251    9497 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1211 15:43:46.142474    9497 start.go:159] libmachine.API.Create for "force-systemd-env-295000" (driver="qemu2")
	I1211 15:43:46.142526    9497 client.go:168] LocalClient.Create starting
	I1211 15:43:46.142647    9497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:43:46.142732    9497 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:46.142750    9497 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:46.142815    9497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:43:46.142875    9497 main.go:141] libmachine: Decoding PEM data...
	I1211 15:43:46.142894    9497 main.go:141] libmachine: Parsing certificate...
	I1211 15:43:46.143486    9497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:43:46.341490    9497 main.go:141] libmachine: Creating SSH key...
	I1211 15:43:46.449935    9497 main.go:141] libmachine: Creating Disk image...
	I1211 15:43:46.449941    9497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:43:46.450123    9497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2
	I1211 15:43:46.460261    9497 main.go:141] libmachine: STDOUT: 
	I1211 15:43:46.460281    9497 main.go:141] libmachine: STDERR: 
	I1211 15:43:46.460336    9497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2 +20000M
	I1211 15:43:46.468737    9497 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:43:46.468754    9497 main.go:141] libmachine: STDERR: 
	I1211 15:43:46.468767    9497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2
	I1211 15:43:46.468774    9497 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:43:46.468781    9497 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:43:46.468833    9497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:90:17:46:39:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/force-systemd-env-295000/disk.qcow2
	I1211 15:43:46.470645    9497 main.go:141] libmachine: STDOUT: 
	I1211 15:43:46.470660    9497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:43:46.470673    9497 client.go:171] duration metric: took 328.151959ms to LocalClient.Create
	I1211 15:43:48.472764    9497 start.go:128] duration metric: took 2.385266084s to createHost
	I1211 15:43:48.472822    9497 start.go:83] releasing machines lock for "force-systemd-env-295000", held for 2.385726125s
	W1211 15:43:48.473229    9497 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:43:48.483896    9497 out.go:201] 
	W1211 15:43:48.492065    9497 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:43:48.492090    9497 out.go:270] * 
	* 
	W1211 15:43:48.494850    9497 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:43:48.504876    9497 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-295000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-295000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-295000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (82.906417ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-295000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-295000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-295000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-11 15:43:48.605988 -0800 PST m=+1338.485169626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-295000 -n force-systemd-env-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-295000 -n force-systemd-env-295000: exit status 7 (36.5295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-295000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-295000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-295000
--- FAIL: TestForceSystemdEnv (10.28s)
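Every failure in this test has the same root cause, visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connect to the unix socket /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening on the build host. A minimal Go sketch of that check (not part of the test suite; file and message names are illustrative) is just a unix-socket dial:

// probe.go: attempts the same unix-socket connect that
// socket_vmnet_client performs before launching QEMU.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Default path from this report; adjust if socket_vmnet is installed elsewhere.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no daemon is
		// serving it; "no such file" means the daemon was never started.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the host, restarting the socket_vmnet service would likely clear this failure and the other qemu2 failures in this run.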

                                                
                                    
TestErrorSpam/setup (9.78s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-911000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-911000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 --driver=qemu2 : exit status 80 (9.782712458s)

                                                
                                                
-- stdout --
	* [nospam-911000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-911000" primary control-plane node in "nospam-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-911000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-911000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-911000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20083
- KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-911000" primary control-plane node in "nospam-911000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-911000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.78s)
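The error_spam_test.go:96 lines above show how this test works: any stderr line that matches none of an allowlist is reported as unexpected, and because the start itself failed, every warning line gets flagged. A sketch of that kind of filter follows, with an illustrative allowlist; the real patterns live in error_spam_test.go and may differ.

// spamcheck.go: a sketch of the stderr filter behind the
// "unexpected stderr" lines above.
package main

import (
	"fmt"
	"strings"
)

// unexpectedStderr returns every non-blank stderr line that matches
// none of the allowed substrings.
func unexpectedStderr(stderr string, allowed []string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.TrimSpace(line) == "" {
			continue
		}
		ok := false
		for _, a := range allowed {
			if strings.Contains(line, a) {
				ok = true
				break
			}
		}
		if !ok {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! StartHost failed, but will try again: driver start failed\n* Failed to start qemu2 VM"
	for _, l := range unexpectedStderr(stderr, []string{"Downloading", "Booting"}) {
		fmt.Printf("unexpected stderr: %q\n", l)
	}
}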

                                                
                                    
TestFunctional/serial/StartWithProxy (10.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-749000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-749000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.087293583s)

                                                
                                                
-- stdout --
	* [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-749000" primary control-plane node in "functional-749000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-749000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61238 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61238 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61238 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-749000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20083
- KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-749000" primary control-plane node in "functional-749000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-749000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:61238 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:61238 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:61238 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (78.337792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.17s)
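Two separate assertions fail here. The test exports HTTP_PROXY=localhost:61238 and expects minikube to print "Found network options:" and "You appear to be using a proxy", but the start aborts before reaching that code path; the only proxy handling that runs is the "Local proxy ignored" warning, emitted because a loopback proxy on the host is unreachable from inside a VM and is therefore not passed to the Docker env. A sketch of that loopback check follows; the helper name and exact rules are assumptions, not minikube's implementation.

// localproxy.go: a sketch of loopback-proxy detection of the kind
// behind the "Local proxy ignored" warning seen above.
package main

import (
	"fmt"
	"net"
	"strings"
)

// isLocalProxy reports whether a proxy value such as "localhost:61238"
// points at the local machine and is therefore useless inside a VM.
func isLocalProxy(proxy string) bool {
	host := strings.TrimPrefix(proxy, "http://")
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h
	}
	if host == "localhost" {
		return true
	}
	if ip := net.ParseIP(host); ip != nil && ip.IsLoopback() {
		return true
	}
	return false
}

func main() {
	fmt.Println(isLocalProxy("localhost:61238")) // true, as in this run
	fmt.Println(isLocalProxy("10.0.0.8:3128"))   // false
}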

                                                
                                    
TestFunctional/serial/SoftStart (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1211 15:22:44.641867    7135 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-749000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-749000 --alsologtostderr -v=8: exit status 80 (5.19312125s)

                                                
                                                
-- stdout --
	* [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-749000" primary control-plane node in "functional-749000" cluster
	* Restarting existing qemu2 VM for "functional-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:22:44.675381    7357 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:22:44.675570    7357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:22:44.675573    7357 out.go:358] Setting ErrFile to fd 2...
	I1211 15:22:44.675576    7357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:22:44.675708    7357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:22:44.676777    7357 out.go:352] Setting JSON to false
	I1211 15:22:44.694340    7357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4934,"bootTime":1733954430,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:22:44.694409    7357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:22:44.699657    7357 out.go:177] * [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:22:44.705024    7357 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:22:44.705072    7357 notify.go:220] Checking for updates...
	I1211 15:22:44.713574    7357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:22:44.717502    7357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:22:44.721565    7357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:22:44.724641    7357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:22:44.727625    7357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:22:44.730856    7357 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:22:44.730916    7357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:22:44.735644    7357 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:22:44.742526    7357 start.go:297] selected driver: qemu2
	I1211 15:22:44.742530    7357 start.go:901] validating driver "qemu2" against &{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:22:44.742572    7357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:22:44.745306    7357 cni.go:84] Creating CNI manager for ""
	I1211 15:22:44.745347    7357 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:22:44.745408    7357 start.go:340] cluster config:
	{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:22:44.749963    7357 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:22:44.757561    7357 out.go:177] * Starting "functional-749000" primary control-plane node in "functional-749000" cluster
	I1211 15:22:44.761571    7357 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:22:44.761589    7357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:22:44.761597    7357 cache.go:56] Caching tarball of preloaded images
	I1211 15:22:44.761692    7357 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:22:44.761698    7357 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:22:44.761753    7357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/functional-749000/config.json ...
	I1211 15:22:44.762240    7357 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:22:44.762271    7357 start.go:364] duration metric: took 24.333µs to acquireMachinesLock for "functional-749000"
	I1211 15:22:44.762282    7357 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:22:44.762293    7357 fix.go:54] fixHost starting: 
	I1211 15:22:44.762414    7357 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
	W1211 15:22:44.762425    7357 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:22:44.770612    7357 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
	I1211 15:22:44.774466    7357 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:22:44.774504    7357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
	I1211 15:22:44.776771    7357 main.go:141] libmachine: STDOUT: 
	I1211 15:22:44.776792    7357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:22:44.776825    7357 fix.go:56] duration metric: took 14.535083ms for fixHost
	I1211 15:22:44.776829    7357 start.go:83] releasing machines lock for "functional-749000", held for 14.5535ms
	W1211 15:22:44.776844    7357 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:22:44.776896    7357 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:22:44.776902    7357 start.go:729] Will try again in 5 seconds ...
	I1211 15:22:49.779137    7357 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:22:49.779641    7357 start.go:364] duration metric: took 364.958µs to acquireMachinesLock for "functional-749000"
	I1211 15:22:49.779778    7357 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:22:49.779796    7357 fix.go:54] fixHost starting: 
	I1211 15:22:49.780557    7357 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
	W1211 15:22:49.780584    7357 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:22:49.789076    7357 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
	I1211 15:22:49.792107    7357 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:22:49.792349    7357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
	I1211 15:22:49.802419    7357 main.go:141] libmachine: STDOUT: 
	I1211 15:22:49.802478    7357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:22:49.802561    7357 fix.go:56] duration metric: took 22.7635ms for fixHost
	I1211 15:22:49.802579    7357 start.go:83] releasing machines lock for "functional-749000", held for 22.917ms
	W1211 15:22:49.802753    7357 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:22:49.808933    7357 out.go:201] 
	W1211 15:22:49.813093    7357 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:22:49.813154    7357 out.go:270] * 
	* 
	W1211 15:22:49.815584    7357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:22:49.823067    7357 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-749000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.194594791s for "functional-749000" cluster.
I1211 15:22:49.836552    7135 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (70.702834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
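The soft-start path differs from a fresh start: fixHost finds the existing machine in state=Stopped and restarts the same VM rather than recreating it, so the log shows no disk-image steps, only two "Restarting existing qemu2 VM" attempts separated by the fixed five-second pause logged at start.go:729. A sketch of that one-retry shape, with illustrative function names:

// retrystart.go: the start/retry shape visible in the log above: one
// failed attempt, a fixed 5-second pause, one retry, then a hard failure.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errRefused = errors.New(`connect to "/var/run/socket_vmnet": connection refused`)

// startHost stands in for the driver start; here it always fails, as in
// this run, because nothing is listening on the socket.
func startHost() error { return errRefused }

func startWithRetry() error {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		return startHost()
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}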

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.591834ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-749000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (35.0325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
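"current-context is not set" follows directly from the failed start: minikube only writes the functional-749000 context into the kubeconfig after a successful provision. The same check kubectl performs can be reproduced with client-go's kubeconfig loader; this is a standalone sketch and requires the k8s.io/client-go module:

// currentcontext.go: checks the same thing as
// "kubectl config current-context".
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Loads from $KUBECONFIG or ~/.kube/config, as kubectl does.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if cfg.CurrentContext == "" {
		fmt.Fprintln(os.Stderr, "error: current-context is not set")
		os.Exit(1)
	}
	fmt.Println(cfg.CurrentContext)
}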

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-749000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-749000 get po -A: exit status 1 (27.248209ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

                                                
                                                
** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-749000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-749000\n"*: args "kubectl --context functional-749000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-749000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (34.710917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl images: exit status 83 (47.828416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (45.921209ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-749000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.944125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.958667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-749000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)
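The intent of this test is cache recoverability: remove the cached image inside the node, confirm crictl no longer finds it, run "cache reload", and confirm crictl finds it again. Here every ssh step short-circuits with exit status 83 because the node host is stopped. A standalone sketch of the same four-step flow, using the exact commands from the log and minimal error handling:

// cachereload.go: replays the cache_reload verification sequence above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", bin, args, out)
	return err
}

func main() {
	mk := "out/minikube-darwin-arm64"
	p := "functional-749000"

	_ = run(mk, "-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// Expected to fail at this point: the image was just removed.
	_ = run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
	_ = run(mk, "-p", p, "cache", "reload")
	// Expected to succeed once the reload repopulates the node.
	if err := run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}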

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 kubectl -- --context functional-749000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 kubectl -- --context functional-749000 get pods: exit status 1 (714.320791ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-749000
	* no server found for cluster "functional-749000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-749000 kubectl -- --context functional-749000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (35.818084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-749000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-749000 get pods: exit status 1 (1.178391084s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-749000
	* no server found for cluster "functional-749000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-749000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (33.48375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.21s)
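
Both kubectl variants above fail identically: the functional-749000 context was never written to the kubeconfig because the cluster never came up. A hypothetical pre-check, not part of the test suite, that surfaces the missing context before kubectl is pointed at it (kubectl's "config get-contexts <name>" exits non-zero when the named context is absent):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const ctx = "functional-749000" // context name taken from the failures above
	// get-contexts with an explicit name fails when the kubeconfig lacks it.
	if err := exec.Command("kubectl", "config", "get-contexts", ctx).Run(); err != nil {
		fmt.Printf("context %q not found in kubeconfig: %v\n", ctx, err)
		return
	}
	fmt.Printf("context %q exists\n", ctx)
}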

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-749000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-749000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.191341125s)

-- stdout --
	* [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-749000" primary control-plane node in "functional-749000" cluster
	* Restarting existing qemu2 VM for "functional-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-749000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-749000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.191859708s for "functional-749000" cluster.
I1211 15:23:00.649976    7135 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (74.582458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
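
Every restart attempt in this report dies at the same point: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach /var/run/socket_vmnet. A minimal Go diagnostic, a sketch rather than minikube code, that distinguishes a missing socket file from one that exists but has no daemon listening behind it:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the errors above
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file missing:", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver error in this report:
		// the socket file exists but nothing is accepting connections on it.
		fmt.Println("socket present but not serving:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is reachable")
}

A "connection refused" result would point at the socket_vmnet daemon being down on the build host, which is consistent with every GUEST_PROVISION failure in this report.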

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-749000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-749000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.722459ms)

** stderr ** 
	error: context "functional-749000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-749000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (35.058541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 logs: exit status 83 (82.631584ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
	|         | -p download-only-273000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
	| delete  | -p download-only-273000                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
	| start   | -o=json --download-only                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
	|         | -p download-only-352000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| delete  | -p download-only-352000                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| delete  | -p download-only-273000                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| delete  | -p download-only-352000                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| start   | --download-only -p                                                       | binary-mirror-893000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | binary-mirror-893000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:61210                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-893000                                                  | binary-mirror-893000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| addons  | enable dashboard -p                                                      | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | addons-645000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | addons-645000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-645000 --wait=true                                             | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-645000                                                         | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| start   | -p nospam-911000 -n=1 --memory=2250 --wait=false                         | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-911000                                                         | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | minikube-local-cache-test:functional-749000                              |                      |         |         |                     |                     |
	| cache   | functional-749000 cache delete                                           | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | minikube-local-cache-test:functional-749000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| ssh     | functional-749000 ssh sudo                                               | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-749000                                                        | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-749000 ssh                                                    | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-749000 cache reload                                           | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	| ssh     | functional-749000 ssh                                                    | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-749000 kubectl --                                             | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | --context functional-749000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 15:22:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 15:22:55.488976    7437 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:22:55.489121    7437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:22:55.489123    7437 out.go:358] Setting ErrFile to fd 2...
	I1211 15:22:55.489125    7437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:22:55.489237    7437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:22:55.490390    7437 out.go:352] Setting JSON to false
	I1211 15:22:55.507996    7437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4945,"bootTime":1733954430,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:22:55.508067    7437 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:22:55.513321    7437 out.go:177] * [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:22:55.520373    7437 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:22:55.520435    7437 notify.go:220] Checking for updates...
	I1211 15:22:55.528327    7437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:22:55.531351    7437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:22:55.534403    7437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:22:55.537376    7437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:22:55.540476    7437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:22:55.543680    7437 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:22:55.543728    7437 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:22:55.547236    7437 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:22:55.554331    7437 start.go:297] selected driver: qemu2
	I1211 15:22:55.554333    7437 start.go:901] validating driver "qemu2" against &{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:22:55.554374    7437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:22:55.556927    7437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:22:55.556947    7437 cni.go:84] Creating CNI manager for ""
	I1211 15:22:55.556980    7437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:22:55.557037    7437 start.go:340] cluster config:
	{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:22:55.561624    7437 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:22:55.570342    7437 out.go:177] * Starting "functional-749000" primary control-plane node in "functional-749000" cluster
	I1211 15:22:55.574419    7437 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:22:55.574433    7437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:22:55.574440    7437 cache.go:56] Caching tarball of preloaded images
	I1211 15:22:55.574516    7437 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:22:55.574519    7437 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:22:55.574576    7437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/functional-749000/config.json ...
	I1211 15:22:55.575129    7437 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:22:55.575172    7437 start.go:364] duration metric: took 39.208µs to acquireMachinesLock for "functional-749000"
	I1211 15:22:55.575179    7437 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:22:55.575181    7437 fix.go:54] fixHost starting: 
	I1211 15:22:55.575300    7437 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
	W1211 15:22:55.575306    7437 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:22:55.582404    7437 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
	I1211 15:22:55.586319    7437 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:22:55.586358    7437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
	I1211 15:22:55.588619    7437 main.go:141] libmachine: STDOUT: 
	I1211 15:22:55.588631    7437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:22:55.588661    7437 fix.go:56] duration metric: took 13.477792ms for fixHost
	I1211 15:22:55.588665    7437 start.go:83] releasing machines lock for "functional-749000", held for 13.490333ms
	W1211 15:22:55.588670    7437 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:22:55.588700    7437 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:22:55.588705    7437 start.go:729] Will try again in 5 seconds ...
	I1211 15:23:00.590785    7437 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:23:00.591155    7437 start.go:364] duration metric: took 320.458µs to acquireMachinesLock for "functional-749000"
	I1211 15:23:00.591370    7437 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:23:00.591383    7437 fix.go:54] fixHost starting: 
	I1211 15:23:00.592094    7437 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
	W1211 15:23:00.592136    7437 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:23:00.596512    7437 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
	I1211 15:23:00.604478    7437 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:23:00.604653    7437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
	I1211 15:23:00.615186    7437 main.go:141] libmachine: STDOUT: 
	I1211 15:23:00.615245    7437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:23:00.615332    7437 fix.go:56] duration metric: took 23.95275ms for fixHost
	I1211 15:23:00.615348    7437 start.go:83] releasing machines lock for "functional-749000", held for 24.179584ms
	W1211 15:23:00.615558    7437 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:23:00.622389    7437 out.go:201] 
	W1211 15:23:00.626492    7437 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:23:00.626509    7437 out.go:270] * 
	W1211 15:23:00.629052    7437 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:23:00.636581    7437 out.go:201] 
	
	
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-749000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
|         | -p download-only-273000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
| delete  | -p download-only-273000                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
| start   | -o=json --download-only                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
|         | -p download-only-352000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| delete  | -p download-only-352000                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| delete  | -p download-only-273000                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| delete  | -p download-only-352000                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| start   | --download-only -p                                                       | binary-mirror-893000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | binary-mirror-893000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:61210                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-893000                                                  | binary-mirror-893000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| addons  | enable dashboard -p                                                      | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | addons-645000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | addons-645000                                                            |                      |         |         |                     |                     |
| start   | -p addons-645000 --wait=true                                             | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-645000                                                         | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| start   | -p nospam-911000 -n=1 --memory=2250 --wait=false                         | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-911000                                                         | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-749000                              |                      |         |         |                     |                     |
| cache   | functional-749000 cache delete                                           | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-749000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| ssh     | functional-749000 ssh sudo                                               | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-749000                                                        | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-749000 ssh                                                    | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-749000 cache reload                                           | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| ssh     | functional-749000 ssh                                                    | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-749000 kubectl --                                             | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --context functional-749000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
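
The final `start` entry above passes `--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision`, which reappears in the cluster config further down as `ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]`. A minimal Go sketch of that component.key=value split (an illustration only, not minikube's actual flag parser):

package main

import (
	"fmt"
	"strings"
)

// parseExtraConfig splits an --extra-config argument of the form
// component.key=value into the {Component Key Value} triple recorded
// under ExtraOptions in the cluster config. Sketch only; minikube's
// real parser lives elsewhere and handles more cases.
func parseExtraConfig(arg string) (component, key, value string, err error) {
	eq := strings.SplitN(arg, "=", 2)
	if len(eq) != 2 {
		return "", "", "", fmt.Errorf("missing '=' in %q", arg)
	}
	dot := strings.SplitN(eq[0], ".", 2)
	if len(dot) != 2 {
		return "", "", "", fmt.Errorf("missing component prefix in %q", arg)
	}
	return dot[0], dot[1], eq[1], nil
}

func main() {
	c, k, v, _ := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
	fmt.Println(c, k, v) // apiserver enable-admission-plugins NamespaceAutoProvision
}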

==> Last Start <==
Log file created at: 2024/12/11 15:22:55
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1211 15:22:55.488976    7437 out.go:345] Setting OutFile to fd 1 ...
I1211 15:22:55.489121    7437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:22:55.489123    7437 out.go:358] Setting ErrFile to fd 2...
I1211 15:22:55.489125    7437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:22:55.489237    7437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:22:55.490390    7437 out.go:352] Setting JSON to false
I1211 15:22:55.507996    7437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4945,"bootTime":1733954430,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1211 15:22:55.508067    7437 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1211 15:22:55.513321    7437 out.go:177] * [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1211 15:22:55.520373    7437 out.go:177]   - MINIKUBE_LOCATION=20083
I1211 15:22:55.520435    7437 notify.go:220] Checking for updates...
I1211 15:22:55.528327    7437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
I1211 15:22:55.531351    7437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1211 15:22:55.534403    7437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1211 15:22:55.537376    7437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
I1211 15:22:55.540476    7437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1211 15:22:55.543680    7437 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:22:55.543728    7437 driver.go:394] Setting default libvirt URI to qemu:///system
I1211 15:22:55.547236    7437 out.go:177] * Using the qemu2 driver based on existing profile
I1211 15:22:55.554331    7437 start.go:297] selected driver: qemu2
I1211 15:22:55.554333    7437 start.go:901] validating driver "qemu2" against &{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1211 15:22:55.554374    7437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1211 15:22:55.556927    7437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1211 15:22:55.556947    7437 cni.go:84] Creating CNI manager for ""
I1211 15:22:55.556980    7437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1211 15:22:55.557037    7437 start.go:340] cluster config:
{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1211 15:22:55.561624    7437 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1211 15:22:55.570342    7437 out.go:177] * Starting "functional-749000" primary control-plane node in "functional-749000" cluster
I1211 15:22:55.574419    7437 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1211 15:22:55.574433    7437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1211 15:22:55.574440    7437 cache.go:56] Caching tarball of preloaded images
I1211 15:22:55.574516    7437 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1211 15:22:55.574519    7437 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1211 15:22:55.574576    7437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/functional-749000/config.json ...
I1211 15:22:55.575129    7437 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1211 15:22:55.575172    7437 start.go:364] duration metric: took 39.208µs to acquireMachinesLock for "functional-749000"
I1211 15:22:55.575179    7437 start.go:96] Skipping create...Using existing machine configuration
I1211 15:22:55.575181    7437 fix.go:54] fixHost starting: 
I1211 15:22:55.575300    7437 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
W1211 15:22:55.575306    7437 fix.go:138] unexpected machine state, will restart: <nil>
I1211 15:22:55.582404    7437 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
I1211 15:22:55.586319    7437 qemu.go:418] Using hvf for hardware acceleration
I1211 15:22:55.586358    7437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
I1211 15:22:55.588619    7437 main.go:141] libmachine: STDOUT: 
I1211 15:22:55.588631    7437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1211 15:22:55.588661    7437 fix.go:56] duration metric: took 13.477792ms for fixHost
I1211 15:22:55.588665    7437 start.go:83] releasing machines lock for "functional-749000", held for 13.490333ms
W1211 15:22:55.588670    7437 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1211 15:22:55.588700    7437 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1211 15:22:55.588705    7437 start.go:729] Will try again in 5 seconds ...
I1211 15:23:00.590785    7437 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1211 15:23:00.591155    7437 start.go:364] duration metric: took 320.458µs to acquireMachinesLock for "functional-749000"
I1211 15:23:00.591370    7437 start.go:96] Skipping create...Using existing machine configuration
I1211 15:23:00.591383    7437 fix.go:54] fixHost starting: 
I1211 15:23:00.592094    7437 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
W1211 15:23:00.592136    7437 fix.go:138] unexpected machine state, will restart: <nil>
I1211 15:23:00.596512    7437 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
I1211 15:23:00.604478    7437 qemu.go:418] Using hvf for hardware acceleration
I1211 15:23:00.604653    7437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
I1211 15:23:00.615186    7437 main.go:141] libmachine: STDOUT: 
I1211 15:23:00.615245    7437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1211 15:23:00.615332    7437 fix.go:56] duration metric: took 23.95275ms for fixHost
I1211 15:23:00.615348    7437 start.go:83] releasing machines lock for "functional-749000", held for 24.179584ms
W1211 15:23:00.615558    7437 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1211 15:23:00.622389    7437 out.go:201] 
W1211 15:23:00.626492    7437 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1211 15:23:00.626509    7437 out.go:270] * 
W1211 15:23:00.629052    7437 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1211 15:23:00.636581    7437 out.go:201] 

* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
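
Both start attempts in the log above die at the same point: libmachine hands the qemu-system-aarch64 command line to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon behind /var/run/socket_vmnet. A minimal Go sketch of that first step (not minikube source; the socket path is taken from the STDERR above) reproduces the same failure when the socket_vmnet daemon is down:

package main

import (
	"fmt"
	"net"
)

// Dial the Unix socket that socket_vmnet_client connects to on behalf of
// the qemu2 driver. When the socket_vmnet daemon is not running, as in
// this run, the dial fails with "connect: connection refused", matching
// the driver-start error logged above.
func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}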

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd238858263/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
|         | -p download-only-273000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
| delete  | -p download-only-273000                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
| start   | -o=json --download-only                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
|         | -p download-only-352000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| delete  | -p download-only-352000                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| delete  | -p download-only-273000                                                  | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| delete  | -p download-only-352000                                                  | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| start   | --download-only -p                                                       | binary-mirror-893000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | binary-mirror-893000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:61210                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-893000                                                  | binary-mirror-893000 | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| addons  | enable dashboard -p                                                      | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | addons-645000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | addons-645000                                                            |                      |         |         |                     |                     |
| start   | -p addons-645000 --wait=true                                             | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-645000                                                         | addons-645000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| start   | -p nospam-911000 -n=1 --memory=2250 --wait=false                         | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-911000 --log_dir                                                  | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-911000                                                         | nospam-911000        | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-749000 cache add                                              | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-749000                              |                      |         |         |                     |                     |
| cache   | functional-749000 cache delete                                           | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-749000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| ssh     | functional-749000 ssh sudo                                               | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-749000                                                        | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-749000 ssh                                                    | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-749000 cache reload                                           | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
| ssh     | functional-749000 ssh                                                    | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:22 PST | 11 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-749000 kubectl --                                             | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --context functional-749000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-749000                                                     | functional-749000    | jenkins | v1.34.0 | 11 Dec 24 15:22 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
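
With the guest VM stopped, the audit table above and the "Last Start" log below are all that `minikube logs` can produce, so the word "Linux", which normally appears in the kernel sections of a running cluster's logs, never shows up. That is the check functional_test.go:1228 applies. A minimal sketch of that assertion against the file written at functional_test.go:1250 (path copied from the log; this is an illustration, not the suite's own helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Read the file produced by `minikube logs --file ...` and require the
// word "Linux" to appear in it. With the guest stopped, no kernel output
// is captured, so the check fails exactly as reported above.
func main() {
	const logsFile = "/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd238858263/001/logs.txt"
	data, err := os.ReadFile(logsFile)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	if !strings.Contains(string(data), "Linux") {
		fmt.Println(`FAIL: expected minikube logs to include word: "Linux"`)
		return
	}
	fmt.Println("PASS: logs mention Linux")
}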

==> Last Start <==
Log file created at: 2024/12/11 15:22:55
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1211 15:22:55.488976    7437 out.go:345] Setting OutFile to fd 1 ...
I1211 15:22:55.489121    7437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:22:55.489123    7437 out.go:358] Setting ErrFile to fd 2...
I1211 15:22:55.489125    7437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:22:55.489237    7437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:22:55.490390    7437 out.go:352] Setting JSON to false
I1211 15:22:55.507996    7437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4945,"bootTime":1733954430,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1211 15:22:55.508067    7437 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1211 15:22:55.513321    7437 out.go:177] * [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1211 15:22:55.520373    7437 out.go:177]   - MINIKUBE_LOCATION=20083
I1211 15:22:55.520435    7437 notify.go:220] Checking for updates...
I1211 15:22:55.528327    7437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
I1211 15:22:55.531351    7437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1211 15:22:55.534403    7437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1211 15:22:55.537376    7437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
I1211 15:22:55.540476    7437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1211 15:22:55.543680    7437 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:22:55.543728    7437 driver.go:394] Setting default libvirt URI to qemu:///system
I1211 15:22:55.547236    7437 out.go:177] * Using the qemu2 driver based on existing profile
I1211 15:22:55.554331    7437 start.go:297] selected driver: qemu2
I1211 15:22:55.554333    7437 start.go:901] validating driver "qemu2" against &{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1211 15:22:55.554374    7437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1211 15:22:55.556927    7437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1211 15:22:55.556947    7437 cni.go:84] Creating CNI manager for ""
I1211 15:22:55.556980    7437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1211 15:22:55.557037    7437 start.go:340] cluster config:
{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1211 15:22:55.561624    7437 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1211 15:22:55.570342    7437 out.go:177] * Starting "functional-749000" primary control-plane node in "functional-749000" cluster
I1211 15:22:55.574419    7437 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1211 15:22:55.574433    7437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1211 15:22:55.574440    7437 cache.go:56] Caching tarball of preloaded images
I1211 15:22:55.574516    7437 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1211 15:22:55.574519    7437 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1211 15:22:55.574576    7437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/functional-749000/config.json ...
I1211 15:22:55.575129    7437 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1211 15:22:55.575172    7437 start.go:364] duration metric: took 39.208µs to acquireMachinesLock for "functional-749000"
I1211 15:22:55.575179    7437 start.go:96] Skipping create...Using existing machine configuration
I1211 15:22:55.575181    7437 fix.go:54] fixHost starting: 
I1211 15:22:55.575300    7437 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
W1211 15:22:55.575306    7437 fix.go:138] unexpected machine state, will restart: <nil>
I1211 15:22:55.582404    7437 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
I1211 15:22:55.586319    7437 qemu.go:418] Using hvf for hardware acceleration
I1211 15:22:55.586358    7437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
I1211 15:22:55.588619    7437 main.go:141] libmachine: STDOUT: 
I1211 15:22:55.588631    7437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1211 15:22:55.588661    7437 fix.go:56] duration metric: took 13.477792ms for fixHost
I1211 15:22:55.588665    7437 start.go:83] releasing machines lock for "functional-749000", held for 13.490333ms
W1211 15:22:55.588670    7437 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1211 15:22:55.588700    7437 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1211 15:22:55.588705    7437 start.go:729] Will try again in 5 seconds ...
I1211 15:23:00.590785    7437 start.go:360] acquireMachinesLock for functional-749000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1211 15:23:00.591155    7437 start.go:364] duration metric: took 320.458µs to acquireMachinesLock for "functional-749000"
I1211 15:23:00.591370    7437 start.go:96] Skipping create...Using existing machine configuration
I1211 15:23:00.591383    7437 fix.go:54] fixHost starting: 
I1211 15:23:00.592094    7437 fix.go:112] recreateIfNeeded on functional-749000: state=Stopped err=<nil>
W1211 15:23:00.592136    7437 fix.go:138] unexpected machine state, will restart: <nil>
I1211 15:23:00.596512    7437 out.go:177] * Restarting existing qemu2 VM for "functional-749000" ...
I1211 15:23:00.604478    7437 qemu.go:418] Using hvf for hardware acceleration
I1211 15:23:00.604653    7437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:d4:e5:cb:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/functional-749000/disk.qcow2
I1211 15:23:00.615186    7437 main.go:141] libmachine: STDOUT: 
I1211 15:23:00.615245    7437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1211 15:23:00.615332    7437 fix.go:56] duration metric: took 23.95275ms for fixHost
I1211 15:23:00.615348    7437 start.go:83] releasing machines lock for "functional-749000", held for 24.179584ms
W1211 15:23:00.615558    7437 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-749000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1211 15:23:00.622389    7437 out.go:201] 
W1211 15:23:00.626492    7437 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1211 15:23:00.626509    7437 out.go:270] * 
W1211 15:23:00.629052    7437 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1211 15:23:00.636581    7437 out.go:201] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
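Note: both restart attempts above die at the same point. The generated qemu-system-aarch64 command is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the guest never boots and every dependent test below fails against a stopped host. A minimal Go sketch of the same reachability probe (the socket path comes from the log; the probe itself is illustrative, not minikube code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same reachability check the failing QEMU launch performs:
        // dial the unix socket that socket_vmnet listens on.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // e.g. "connect: connection refused", as in the log above
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this dial fails on the build agent, getting the socket_vmnet daemon running again is the likely fix; the failures that follow are downstream of this one refusal.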

                                                
                                    
TestFunctional/serial/InvalidService (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-749000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-749000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.357ms)

                                                
                                                
** stderr ** 
	error: context "functional-749000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2323: kubectl --context functional-749000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
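This failure mode repeats through the rest of the functional suite: because the cluster never started, the kubeconfig contains no "functional-749000" context, and every kubectl --context invocation exits 1 before reaching any API server. Roughly the check that fails, as a client-go sketch (loading rules and context name taken from the log; the code is illustrative, not the harness's):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig (honoring $KUBECONFIG) and report
        // whether the context the tests rely on actually exists.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Println("loading kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["functional-749000"]; !ok {
            fmt.Println(`context "functional-749000" does not exist`)
            return
        }
        fmt.Println("context found")
    }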

                                                
                                    
TestFunctional/parallel/DashboardCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-749000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-749000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-749000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-749000 --alsologtostderr -v=1] stderr:
I1211 15:23:39.695640    7638 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:39.696102    7638 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:39.696105    7638 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:39.696108    7638 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:39.696256    7638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:39.696534    7638 mustload.go:65] Loading cluster: functional-749000
I1211 15:23:39.696738    7638 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:39.701028    7638 out.go:177] * The control-plane node functional-749000 host is not running: state=Stopped
I1211 15:23:39.704918    7638 out.go:177]   To start a cluster, run: "minikube start -p functional-749000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (46.973625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)
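The dashboard test only passes if the daemonized command prints a URL on stdout; with the host stopped it prints the advice text instead, hence "output didn't produce a URL". An illustrative Go stand-in for that wait (binary path and flags copied from the log; the scanning loop is ours, not the harness's):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "dashboard", "--url", "-p", "functional-749000")
        out, err := cmd.StdoutPipe()
        if err != nil {
            fmt.Println(err)
            return
        }
        if err := cmd.Start(); err != nil {
            fmt.Println(err)
            return
        }
        defer func() { _ = cmd.Process.Kill() }() // don't leave the child running

        // Scan stdout line by line; succeed only when a URL appears.
        urlRE := regexp.MustCompile(`https?://[^\s]+`)
        sc := bufio.NewScanner(out)
        for sc.Scan() {
            if u := urlRE.FindString(sc.Text()); u != "" {
                fmt.Println("dashboard URL:", u)
                return
            }
        }
        fmt.Println("output didn't produce a URL") // what the test reports above
    }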

                                                
                                    
TestFunctional/parallel/StatusCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 status: exit status 7 (78.663292ms)

                                                
                                                
-- stdout --
	functional-749000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-749000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (36.236291ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-749000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 status -o json: exit status 7 (34.921917ms)

                                                
                                                
-- stdout --
	{"Name":"functional-749000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-749000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (34.552875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.19s)
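Exit status 7 here is consistent with minikube status encoding component state as a bitmask in its exit code; assuming the layout used in minikube's status command (bit 0 host, bit 1 cluster, bit 2 kubernetes), 7 means all three are down, which matches the Stopped/Stopped/Stopped/Stopped output above. A sketch of decoding it (the bit assignments are an assumption, not something this log states):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // Assumed bit layout: bit 0 = host not running, bit 1 = cluster not
    // running, bit 2 = kubernetes (kubelet/apiserver) not running.
    const (
        hostDown    = 1 << 0
        clusterDown = 1 << 1
        k8sDown     = 1 << 2
    )

    func main() {
        err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-749000", "status").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            code := ee.ExitCode() // 7 in the runs above
            fmt.Printf("exit %d: host down=%t cluster down=%t k8s down=%t\n",
                code, code&hostDown != 0, code&clusterDown != 0, code&k8sDown != 0)
        }
    }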

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-749000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-749000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.410333ms)

                                                
                                                
** stderr ** 
	error: context "functional-749000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-749000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-749000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-749000 describe po hello-node-connect: exit status 1 (27.064833ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

                                                
                                                
** /stderr **
functional_test.go:1604: "kubectl --context functional-749000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-749000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-749000 logs -l app=hello-node-connect: exit status 1 (26.866667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

                                                
                                                
** /stderr **
functional_test.go:1610: "kubectl --context functional-749000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-749000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-749000 describe svc hello-node-connect: exit status 1 (27.004917ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

                                                
                                                
** /stderr **
functional_test.go:1616: "kubectl --context functional-749000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (34.417417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-749000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (40.54275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "echo hello": exit status 83 (61.442208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n"*. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "cat /etc/hostname": exit status 83 (68.323917ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-749000"- but got *"* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n"*. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (34.919375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.17s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.087125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-749000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.648708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-749000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-749000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cp functional-749000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2169432672/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 cp functional-749000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2169432672/001/cp-test.txt: exit status 83 (42.152083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-749000 cp functional-749000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2169432672/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.458542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2169432672/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.013875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-749000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (54.74575ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-749000 ssh -n functional-749000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-749000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-749000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
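The (-want +got) blocks in this test are go-cmp diffs: "-" lines show the expected file content, "+" lines show what actually came back (the stopped-host advice text). The harness's comparison reduces to something like this sketch using github.com/google/go-cmp (the two strings are taken from the log; the surrounding program is illustrative):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-749000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-749000\"\n"
        // cmp.Diff returns "" for equal values; otherwise a
        // (-want +got) report like the ones in this log.
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("content mismatch (-want +got):\n%s", diff)
        }
    }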

                                                
                                    
TestFunctional/parallel/FileSync (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7135/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/test/nested/copy/7135/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/test/nested/copy/7135/hosts": exit status 83 (47.988083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/test/nested/copy/7135/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-749000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-749000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (34.628625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

                                                
                                    
TestFunctional/parallel/CertSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7135.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/7135.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/7135.pem": exit status 83 (53.468208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7135.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo cat /etc/ssl/certs/7135.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7135.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-749000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-749000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7135.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /usr/share/ca-certificates/7135.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /usr/share/ca-certificates/7135.pem": exit status 83 (45.655875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7135.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo cat /usr/share/ca-certificates/7135.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7135.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-749000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-749000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (49.807208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-749000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-749000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/71352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/71352.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/71352.pem": exit status 83 (47.636542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/71352.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo cat /etc/ssl/certs/71352.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/71352.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-749000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-749000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/71352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /usr/share/ca-certificates/71352.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /usr/share/ca-certificates/71352.pem": exit status 83 (45.649166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/71352.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo cat /usr/share/ca-certificates/71352.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/71352.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-749000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-749000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.123333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-749000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-749000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (34.254041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.32s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-749000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-749000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.271542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-749000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-749000 -n functional-749000: exit status 7 (35.199959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-749000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo systemctl is-active crio": exit status 83 (46.297209ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1211 15:23:01.319147    7485 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:01.319377    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:01.319382    7485 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:01.319384    7485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:01.319525    7485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:01.319805    7485 mustload.go:65] Loading cluster: functional-749000
I1211 15:23:01.320068    7485 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:01.324357    7485 out.go:177] * The control-plane node functional-749000 host is not running: state=Stopped
I1211 15:23:01.339281    7485 out.go:177]   To start a cluster, run: "minikube start -p functional-749000"

stdout: * The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7484: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-749000": client config: context "functional-749000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (86.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1211 15:23:01.409329    7135 retry.go:31] will retry after 4.433012613s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-749000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-749000 get svc nginx-svc: exit status 1 (70.246ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-749000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-749000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (86.49s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-749000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-749000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.982958ms)

** stderr ** 
	error: context "functional-749000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-749000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 service list: exit status 83 (48.177625ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-749000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 service list -o json: exit status 83 (46.816042ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-749000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 service --namespace=default --https --url hello-node: exit status 83 (47.927875ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-749000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 service hello-node --url --format={{.IP}}: exit status 83 (47.697916ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-749000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 service hello-node --url: exit status 83 (46.86325ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-749000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:1569: failed to parse "* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"": parse "* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 version -o=json --components: exit status 83 (44.968833ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-749000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-749000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-749000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-749000 image ls --format short --alsologtostderr:
I1211 15:23:44.887851    7762 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:44.887998    7762 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:44.888002    7762 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:44.888004    7762 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:44.888131    7762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:44.888528    7762 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:44.888602    7762 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-749000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-749000 image ls --format table --alsologtostderr:
I1211 15:23:45.130949    7776 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:45.131120    7776 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:45.131123    7776 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:45.131126    7776 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:45.131254    7776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:45.131664    7776 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:45.131721    7776 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-749000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-749000 image ls --format json --alsologtostderr:
I1211 15:23:45.092202    7773 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:45.092377    7773 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:45.092380    7773 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:45.092382    7773 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:45.092506    7773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:45.092888    7773 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:45.092948    7773 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-749000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-749000 image ls --format yaml --alsologtostderr:
I1211 15:23:44.927349    7765 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:44.927523    7765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:44.927526    7765 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:44.927529    7765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:44.927648    7765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:44.928059    7765 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:44.928123    7765 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh pgrep buildkitd: exit status 83 (43.959542ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image build -t localhost/my-image:functional-749000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-749000 image build -t localhost/my-image:functional-749000 testdata/build --alsologtostderr:
I1211 15:23:45.011098    7769 out.go:345] Setting OutFile to fd 1 ...
I1211 15:23:45.011582    7769 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:45.011586    7769 out.go:358] Setting ErrFile to fd 2...
I1211 15:23:45.011588    7769 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:23:45.011718    7769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:23:45.012111    7769 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:45.012655    7769 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:23:45.012883    7769 build_images.go:133] succeeded building to: 
I1211 15:23:45.012887    7769 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls
functional_test.go:446: expected "localhost/my-image:functional-749000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image load --daemon kicbase/echo-server:functional-749000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-749000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image load --daemon kicbase/echo-server:functional-749000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-749000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-749000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image load --daemon kicbase/echo-server:functional-749000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-749000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image save kicbase/echo-server:functional-749000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-749000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-749000 docker-env) && out/minikube-darwin-arm64 status -p functional-749000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-749000 docker-env) && out/minikube-darwin-arm64 status -p functional-749000": exit status 1 (45.922375ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2: exit status 83 (48.868209ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
** stderr ** 
	I1211 15:23:45.170785    7779 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:23:45.171141    7779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:45.171145    7779 out.go:358] Setting ErrFile to fd 2...
	I1211 15:23:45.171147    7779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:45.171282    7779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:23:45.171462    7779 mustload.go:65] Loading cluster: functional-749000
	I1211 15:23:45.171642    7779 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:23:45.175067    7779 out.go:177] * The control-plane node functional-749000 host is not running: state=Stopped
	I1211 15:23:45.182122    7779 out.go:177]   To start a cluster, run: "minikube start -p functional-749000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2: exit status 83 (46.445583ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
** stderr ** 
	I1211 15:23:45.264039    7783 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:23:45.264204    7783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:45.264208    7783 out.go:358] Setting ErrFile to fd 2...
	I1211 15:23:45.264210    7783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:45.264341    7783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:23:45.264558    7783 mustload.go:65] Loading cluster: functional-749000
	I1211 15:23:45.264765    7783 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:23:45.269108    7783 out.go:177] * The control-plane node functional-749000 host is not running: state=Stopped
	I1211 15:23:45.273093    7783 out.go:177]   To start a cluster, run: "minikube start -p functional-749000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n", want=*"context has been updated"*
I1211 15:24:01.202852    7135 retry.go:31] will retry after 26.603642435s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2: exit status 83 (43.662541ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
** stderr ** 
	I1211 15:23:45.219117    7781 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:23:45.219277    7781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:45.219280    7781 out.go:358] Setting ErrFile to fd 2...
	I1211 15:23:45.219283    7781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:45.219422    7781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:23:45.219626    7781 mustload.go:65] Loading cluster: functional-749000
	I1211 15:23:45.219827    7781 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:23:45.223161    7781 out.go:177] * The control-plane node functional-749000 host is not running: state=Stopped
	I1211 15:23:45.227135    7781 out.go:177]   To start a cluster, run: "minikube start -p functional-749000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-749000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-749000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-749000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1211 15:24:27.895550    7135 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036333709s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 13 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1211 15:24:53.034944    7135 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:25:03.037151    7135 retry.go:31] will retry after 2.464216418s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1211 15:25:15.505651    7135 retry.go:31] will retry after 5.938882865s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:59301->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

TestMultiControlPlane/serial/StartCluster (10.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-978000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-978000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.043801792s)

-- stdout --
	* [ha-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-978000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:25:23.443474    7827 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:25:23.443630    7827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:25:23.443634    7827 out.go:358] Setting ErrFile to fd 2...
	I1211 15:25:23.443636    7827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:25:23.443754    7827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:25:23.444871    7827 out.go:352] Setting JSON to false
	I1211 15:25:23.462638    7827 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5093,"bootTime":1733954430,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:25:23.462711    7827 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:25:23.469986    7827 out.go:177] * [ha-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:25:23.477918    7827 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:25:23.477929    7827 notify.go:220] Checking for updates...
	I1211 15:25:23.483805    7827 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:25:23.487916    7827 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:25:23.490995    7827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:25:23.493860    7827 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:25:23.496953    7827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:25:23.500143    7827 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:25:23.503895    7827 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:25:23.510888    7827 start.go:297] selected driver: qemu2
	I1211 15:25:23.510893    7827 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:25:23.510898    7827 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:25:23.513448    7827 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:25:23.515899    7827 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:25:23.519975    7827 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:25:23.519992    7827 cni.go:84] Creating CNI manager for ""
	I1211 15:25:23.520011    7827 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1211 15:25:23.520018    7827 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 15:25:23.520051    7827 start.go:340] cluster config:
	{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:25:23.524643    7827 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:25:23.532910    7827 out.go:177] * Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	I1211 15:25:23.544608    7827 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:25:23.544627    7827 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:25:23.544640    7827 cache.go:56] Caching tarball of preloaded images
	I1211 15:25:23.544737    7827 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:25:23.544745    7827 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:25:23.544991    7827 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/ha-978000/config.json ...
	I1211 15:25:23.545003    7827 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/ha-978000/config.json: {Name:mk7de3676816382a0e0ce5918f7b9183368db400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:25:23.545580    7827 start.go:360] acquireMachinesLock for ha-978000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:25:23.545639    7827 start.go:364] duration metric: took 52.584µs to acquireMachinesLock for "ha-978000"
	I1211 15:25:23.545651    7827 start.go:93] Provisioning new machine with config: &{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:25:23.545701    7827 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:25:23.554939    7827 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:25:23.574120    7827 start.go:159] libmachine.API.Create for "ha-978000" (driver="qemu2")
	I1211 15:25:23.574146    7827 client.go:168] LocalClient.Create starting
	I1211 15:25:23.574222    7827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:25:23.574263    7827 main.go:141] libmachine: Decoding PEM data...
	I1211 15:25:23.574276    7827 main.go:141] libmachine: Parsing certificate...
	I1211 15:25:23.574321    7827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:25:23.574354    7827 main.go:141] libmachine: Decoding PEM data...
	I1211 15:25:23.574363    7827 main.go:141] libmachine: Parsing certificate...
	I1211 15:25:23.574958    7827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:25:23.736027    7827 main.go:141] libmachine: Creating SSH key...
	I1211 15:25:23.919979    7827 main.go:141] libmachine: Creating Disk image...
	I1211 15:25:23.919987    7827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:25:23.920251    7827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:25:23.930568    7827 main.go:141] libmachine: STDOUT: 
	I1211 15:25:23.930590    7827 main.go:141] libmachine: STDERR: 
	I1211 15:25:23.930649    7827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2 +20000M
	I1211 15:25:23.939209    7827 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:25:23.939225    7827 main.go:141] libmachine: STDERR: 
	I1211 15:25:23.939242    7827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:25:23.939249    7827 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:25:23.939259    7827 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:25:23.939292    7827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:7f:fe:19:ff:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:25:23.941107    7827 main.go:141] libmachine: STDOUT: 
	I1211 15:25:23.941123    7827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:25:23.941143    7827 client.go:171] duration metric: took 366.996167ms to LocalClient.Create
	I1211 15:25:25.943313    7827 start.go:128] duration metric: took 2.397620583s to createHost
	I1211 15:25:25.943357    7827 start.go:83] releasing machines lock for "ha-978000", held for 2.397740459s
	W1211 15:25:25.943411    7827 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:25:25.960793    7827 out.go:177] * Deleting "ha-978000" in qemu2 ...
	W1211 15:25:25.990693    7827 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:25:25.990711    7827 start.go:729] Will try again in 5 seconds ...
	I1211 15:25:30.992883    7827 start.go:360] acquireMachinesLock for ha-978000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:25:30.993436    7827 start.go:364] duration metric: took 429.917µs to acquireMachinesLock for "ha-978000"
	I1211 15:25:30.993551    7827 start.go:93] Provisioning new machine with config: &{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:25:30.993842    7827 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:25:31.012783    7827 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:25:31.063101    7827 start.go:159] libmachine.API.Create for "ha-978000" (driver="qemu2")
	I1211 15:25:31.063144    7827 client.go:168] LocalClient.Create starting
	I1211 15:25:31.063263    7827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:25:31.063348    7827 main.go:141] libmachine: Decoding PEM data...
	I1211 15:25:31.063373    7827 main.go:141] libmachine: Parsing certificate...
	I1211 15:25:31.063447    7827 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:25:31.063506    7827 main.go:141] libmachine: Decoding PEM data...
	I1211 15:25:31.063534    7827 main.go:141] libmachine: Parsing certificate...
	I1211 15:25:31.064266    7827 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:25:31.237771    7827 main.go:141] libmachine: Creating SSH key...
	I1211 15:25:31.372040    7827 main.go:141] libmachine: Creating Disk image...
	I1211 15:25:31.372048    7827 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:25:31.372274    7827 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:25:31.382291    7827 main.go:141] libmachine: STDOUT: 
	I1211 15:25:31.382312    7827 main.go:141] libmachine: STDERR: 
	I1211 15:25:31.382369    7827 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2 +20000M
	I1211 15:25:31.391073    7827 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:25:31.391093    7827 main.go:141] libmachine: STDERR: 
	I1211 15:25:31.391109    7827 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:25:31.391113    7827 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:25:31.391120    7827 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:25:31.391151    7827 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ea:48:4c:c6:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:25:31.393175    7827 main.go:141] libmachine: STDOUT: 
	I1211 15:25:31.393195    7827 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:25:31.393210    7827 client.go:171] duration metric: took 330.063375ms to LocalClient.Create
	I1211 15:25:33.395368    7827 start.go:128] duration metric: took 2.401524s to createHost
	I1211 15:25:33.395438    7827 start.go:83] releasing machines lock for "ha-978000", held for 2.401999s
	W1211 15:25:33.395875    7827 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:25:33.417468    7827 out.go:201] 
	W1211 15:25:33.427284    7827 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:25:33.427325    7827 out.go:270] * 
	* 
	W1211 15:25:33.430291    7827 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:25:33.440468    7827 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-978000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (69.983458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.12s)
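Both VM creation attempts above fail at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, so no VM ever boots and every later sub-test inherits a stopped host. A minimal sketch of how the daemon could be probed on the agent, assuming socket_vmnet was installed as a root Homebrew service per the minikube QEMU driver docs (the service name and restart step are assumptions, not taken from this log):

	# is anything bound to the daemon socket the client is dialing?
	ls -l /var/run/socket_vmnet
	# restart the daemon; it must run as root to create vmnet interfaces
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet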

                                                
                                    
TestMultiControlPlane/serial/DeployApp (80.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.980917ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-978000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- rollout status deployment/busybox: exit status 1 (62.741459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.833459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:33.716562    7135 retry.go:31] will retry after 942.159069ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.526292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:34.769570    7135 retry.go:31] will retry after 1.191662977s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.2365ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:36.070846    7135 retry.go:31] will retry after 2.85986229s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.811208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:39.042922    7135 retry.go:31] will retry after 2.133185825s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.26625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:41.286809    7135 retry.go:31] will retry after 4.012209974s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.926791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:45.408304    7135 retry.go:31] will retry after 10.992355841s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.3215ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:25:56.514165    7135 retry.go:31] will retry after 11.21015375s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.166834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:26:07.837826    7135 retry.go:31] will retry after 13.633907632s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.083667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:26:21.581106    7135 retry.go:31] will retry after 32.366276469s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.594875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.334958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.676084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec  -- nslookup kubernetes.default: exit status 1 (63.021917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.871125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.661959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (80.82s)
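Every kubectl call above fails before reaching any server: StartCluster never created the VM, so no kubeconfig context for "ha-978000" was ever written and there is no API server to contact. A quick way to confirm the failures come from the missing context rather than from the test itself (a sketch, assuming kubectl is on PATH):

	# "ha-978000" should be absent from the context list
	kubectl config get-contexts
	# the profile exists, but its only node never started
	out/minikube-darwin-arm64 profile list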

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.089625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-978000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.347167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-978000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-978000 -v=7 --alsologtostderr: exit status 83 (48.783416ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-978000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:54.472913    7922 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:54.473317    7922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.473321    7922 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:54.473324    7922 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.473491    7922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:54.473732    7922 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:54.473943    7922 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:54.479055    7922 out.go:177] * The control-plane node ha-978000 host is not running: state=Stopped
	I1211 15:26:54.482942    7922 out.go:177]   To start a cluster, run: "minikube start -p ha-978000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-978000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.976958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-978000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-978000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.095292ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-978000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-978000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-978000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (35.0545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-978000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-978000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (35.0115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
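The two assertions above compare against the full `profile list --output json` dump; because the VM was never created, the profile still records a single node and a "Starting" status instead of the expected 4 nodes and "HAppy". The relevant fields can be pulled out without the whole blob (a sketch, assuming jq is available on the agent; jq is not part of the minikube tooling):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'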

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status --output json -v=7 --alsologtostderr: exit status 7 (35.155792ms)

                                                
                                                
-- stdout --
	{"Name":"ha-978000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:54.707070    7934 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:54.707254    7934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.707257    7934 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:54.707259    7934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.707394    7934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:54.707519    7934 out.go:352] Setting JSON to true
	I1211 15:26:54.707533    7934 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:54.707599    7934 notify.go:220] Checking for updates...
	I1211 15:26:54.707735    7934 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:54.707743    7934 status.go:174] checking status of ha-978000 ...
	I1211 15:26:54.707983    7934 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:26:54.707987    7934 status.go:384] host is not running, skipping remaining checks
	I1211 15:26:54.707989    7934 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-978000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (35.078334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
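The decode failure at ha_test.go:335 is a shape mismatch rather than corrupt output: with a single (stopped) node in the profile, `status --output json` emits one bare JSON object, while the test unmarshals into []cluster.Status and so expects the array a healthy multi-node HA cluster would produce. The shape is easy to confirm by hand (a sketch):

	out/minikube-darwin-arm64 -p ha-978000 status --output json
	# => {"Name":"ha-978000","Host":"Stopped",...}   a bare object, not [ {...}, {...} ]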

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.522292ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:54.777848    7938 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:54.778254    7938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.778258    7938 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:54.778261    7938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.778411    7938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:54.778686    7938 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:54.778892    7938 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:54.782302    7938 out.go:201] 
	W1211 15:26:54.785401    7938 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1211 15:26:54.785405    7938 out.go:270] * 
	* 
	W1211 15:26:54.787131    7938 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:26:54.790443    7938 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-978000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (35.159167ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:54.826692    7940 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:54.826893    7940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.826896    7940 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:54.826898    7940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.827012    7940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:54.827150    7940 out.go:352] Setting JSON to false
	I1211 15:26:54.827160    7940 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:54.827225    7940 notify.go:220] Checking for updates...
	I1211 15:26:54.827930    7940 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:54.827960    7940 status.go:174] checking status of ha-978000 ...
	I1211 15:26:54.828492    7940 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:26:54.828497    7940 status.go:384] host is not running, skipping remaining checks
	I1211 15:26:54.828500    7940 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.276166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
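Note on the post-mortem helper above: --format={{.Host}} is a Go text/template that selects one field of minikube's status, the same struct printed in the status.go:176 log line (Name, Host, Kubelet, APIServer, Kubeconfig, ...). A minimal sketch of that rendering, using an illustrative stand-in for the real status type:

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the fields visible in the log line
	// "ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped ...}".
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// The same template string the helper passes via --format={{.Host}}.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, Status{Name: "ha-978000", Host: "Stopped"}) // prints: Stopped
	}

Against the stopped profile above, this prints exactly the "Stopped" seen in the post-mortem stdout.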

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-978000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.004167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (56.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 node start m02 -v=7 --alsologtostderr: exit status 85 (51.731667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:54.983324    7949 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:54.983765    7949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.983769    7949 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:54.983771    7949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:54.983928    7949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:54.984152    7949 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:54.984340    7949 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:54.987523    7949 out.go:201] 
	W1211 15:26:54.990445    7949 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1211 15:26:54.990450    7949 out.go:270] * 
	* 
	W1211 15:26:54.992818    7949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:26:54.997400    7949 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1211 15:26:54.983324    7949 out.go:345] Setting OutFile to fd 1 ...
I1211 15:26:54.983765    7949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:26:54.983769    7949 out.go:358] Setting ErrFile to fd 2...
I1211 15:26:54.983771    7949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:26:54.983928    7949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:26:54.984152    7949 mustload.go:65] Loading cluster: ha-978000
I1211 15:26:54.984340    7949 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:26:54.987523    7949 out.go:201] 
W1211 15:26:54.990445    7949 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1211 15:26:54.990450    7949 out.go:270] * 
* 
W1211 15:26:54.992818    7949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1211 15:26:54.997400    7949 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-978000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (35.602833ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:55.036389    7951 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:55.036575    7951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:55.036578    7951 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:55.036581    7951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:55.036723    7951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:55.036856    7951 out.go:352] Setting JSON to false
	I1211 15:26:55.036866    7951 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:55.036921    7951 notify.go:220] Checking for updates...
	I1211 15:26:55.037074    7951 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:55.037082    7951 status.go:174] checking status of ha-978000 ...
	I1211 15:26:55.037307    7951 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:26:55.037311    7951 status.go:384] host is not running, skipping remaining checks
	I1211 15:26:55.037313    7951 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:26:55.038203    7135 retry.go:31] will retry after 1.155602649s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (78.675708ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:56.272652    7953 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:56.272889    7953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:56.272893    7953 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:56.272896    7953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:56.273051    7953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:56.273227    7953 out.go:352] Setting JSON to false
	I1211 15:26:56.273239    7953 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:56.273265    7953 notify.go:220] Checking for updates...
	I1211 15:26:56.273481    7953 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:56.273491    7953 status.go:174] checking status of ha-978000 ...
	I1211 15:26:56.273774    7953 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:26:56.273779    7953 status.go:384] host is not running, skipping remaining checks
	I1211 15:26:56.273781    7953 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:26:56.274809    7135 retry.go:31] will retry after 1.671619527s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (79.004709ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:26:58.025668    7955 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:26:58.025910    7955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:58.025914    7955 out.go:358] Setting ErrFile to fd 2...
	I1211 15:26:58.025917    7955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:26:58.026088    7955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:26:58.026248    7955 out.go:352] Setting JSON to false
	I1211 15:26:58.026262    7955 mustload.go:65] Loading cluster: ha-978000
	I1211 15:26:58.026313    7955 notify.go:220] Checking for updates...
	I1211 15:26:58.026548    7955 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:26:58.026563    7955 status.go:174] checking status of ha-978000 ...
	I1211 15:26:58.026893    7955 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:26:58.026898    7955 status.go:384] host is not running, skipping remaining checks
	I1211 15:26:58.026900    7955 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:26:58.027923    7135 retry.go:31] will retry after 2.520561605s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (80.941833ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:27:00.629733    7957 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:00.629968    7957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:00.629973    7957 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:00.629976    7957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:00.630140    7957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:00.630302    7957 out.go:352] Setting JSON to false
	I1211 15:27:00.630315    7957 mustload.go:65] Loading cluster: ha-978000
	I1211 15:27:00.630349    7957 notify.go:220] Checking for updates...
	I1211 15:27:00.630564    7957 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:00.630574    7957 status.go:174] checking status of ha-978000 ...
	I1211 15:27:00.630885    7957 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:27:00.630889    7957 status.go:384] host is not running, skipping remaining checks
	I1211 15:27:00.630892    7957 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:27:00.631990    7135 retry.go:31] will retry after 3.88605344s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (80.324958ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:27:04.598548    7960 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:04.598785    7960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:04.598789    7960 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:04.598792    7960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:04.598946    7960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:04.599123    7960 out.go:352] Setting JSON to false
	I1211 15:27:04.599137    7960 mustload.go:65] Loading cluster: ha-978000
	I1211 15:27:04.599173    7960 notify.go:220] Checking for updates...
	I1211 15:27:04.599419    7960 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:04.599430    7960 status.go:174] checking status of ha-978000 ...
	I1211 15:27:04.599745    7960 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:27:04.599750    7960 status.go:384] host is not running, skipping remaining checks
	I1211 15:27:04.599752    7960 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:27:04.600794    7135 retry.go:31] will retry after 4.282412112s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (78.146792ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:27:08.961596    7962 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:08.961808    7962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:08.961812    7962 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:08.961814    7962 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:08.962015    7962 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:08.962166    7962 out.go:352] Setting JSON to false
	I1211 15:27:08.962178    7962 mustload.go:65] Loading cluster: ha-978000
	I1211 15:27:08.962217    7962 notify.go:220] Checking for updates...
	I1211 15:27:08.962430    7962 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:08.962440    7962 status.go:174] checking status of ha-978000 ...
	I1211 15:27:08.962757    7962 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:27:08.962761    7962 status.go:384] host is not running, skipping remaining checks
	I1211 15:27:08.962764    7962 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:27:08.963761    7135 retry.go:31] will retry after 7.756838641s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (80.185667ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:27:16.801110    7968 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:16.801305    7968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:16.801309    7968 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:16.801312    7968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:16.801486    7968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:16.801640    7968 out.go:352] Setting JSON to false
	I1211 15:27:16.801655    7968 mustload.go:65] Loading cluster: ha-978000
	I1211 15:27:16.801687    7968 notify.go:220] Checking for updates...
	I1211 15:27:16.801910    7968 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:16.801919    7968 status.go:174] checking status of ha-978000 ...
	I1211 15:27:16.802216    7968 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:27:16.802220    7968 status.go:384] host is not running, skipping remaining checks
	I1211 15:27:16.802222    7968 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:27:16.803206    7135 retry.go:31] will retry after 11.155033493s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (77.342167ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:27:28.035785    7973 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:28.035992    7973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:28.035997    7973 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:28.036000    7973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:28.036158    7973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:28.036300    7973 out.go:352] Setting JSON to false
	I1211 15:27:28.036312    7973 mustload.go:65] Loading cluster: ha-978000
	I1211 15:27:28.036339    7973 notify.go:220] Checking for updates...
	I1211 15:27:28.036559    7973 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:28.036569    7973 status.go:174] checking status of ha-978000 ...
	I1211 15:27:28.036883    7973 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:27:28.036888    7973 status.go:384] host is not running, skipping remaining checks
	I1211 15:27:28.036890    7973 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1211 15:27:28.037925    7135 retry.go:31] will retry after 23.286382647s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (78.927625ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:27:51.403073    7977 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:51.403318    7977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:51.403322    7977 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:51.403326    7977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:51.403501    7977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:51.403661    7977 out.go:352] Setting JSON to false
	I1211 15:27:51.403674    7977 mustload.go:65] Loading cluster: ha-978000
	I1211 15:27:51.403723    7977 notify.go:220] Checking for updates...
	I1211 15:27:51.403968    7977 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:51.403979    7977 status.go:174] checking status of ha-978000 ...
	I1211 15:27:51.404315    7977 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:27:51.404319    7977 status.go:384] host is not running, skipping remaining checks
	I1211 15:27:51.404322    7977 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr" : exit status 7
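The waits logged by retry.go above grow roughly geometrically with jitter (1.16s, 1.67s, 2.52s, 3.89s, 4.28s, 7.76s, 11.16s, 23.29s) before ha_test.go:434 gives up. A minimal sketch of that jittered exponential-backoff pattern (retryExpo is an illustrative helper with shortened demo durations, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with exponentially growing, jittered delays,
	// mirroring the "will retry after ..." lines in the log above.
	func retryExpo(fn func() error, base, max time.Duration, attempts int) error {
		delay := base
		for i := 0; i < attempts; i++ {
			err := fn()
			if err == nil {
				return nil
			}
			if i == attempts-1 {
				return err // out of attempts; surface the last error
			}
			// Jitter: scale the nominal delay by a random factor in [0.5, 1.5).
			sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
			if sleep > max {
				sleep = max
			}
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2 // double the nominal delay each round
		}
		return nil
	}

	func main() {
		// Every attempt fails here, like the exit-status-7 status calls above.
		_ = retryExpo(func() error { return errors.New("exit status 7") },
			100*time.Millisecond, 2*time.Second, 4)
	}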
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (37.13575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-978000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-978000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.212916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
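Both assertions in this block come from the same probe: the test decodes `minikube profile list --output json` and checks the profile's Status and the length of Config.Nodes. A minimal sketch of that decoding against the JSON shape quoted above (the profileList type here is illustrative, not the test's own):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Illustrative types for the quoted shape:
	// {"invalid":[],"valid":[{"Name":...,"Status":...,"Config":{"Nodes":[...]}}]}.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		raw := `{"invalid":[],"valid":[{"Name":"ha-978000","Status":"Starting","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		p := pl.Valid[0]
		// The test expects Status "HAppy" and 4 nodes; the profile above
		// reports "Starting" with a single control-plane node.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}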

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-978000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-978000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-978000 -v=7 --alsologtostderr: (3.661909791s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.230075542s)

                                                
                                                
-- stdout --
	* [ha-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
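The stdout above already shows the root cause: qemu is launched through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, which typically means the socket_vmnet daemon is not running on the host. A minimal diagnostic sketch that probes the same Unix socket (path taken from the log; illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Default socket path from the failing qemu command line in the log.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // e.g. "connection refused"
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}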
** stderr ** 
	I1211 15:27:55.293337    8008 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:27:55.293541    8008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:55.293546    8008 out.go:358] Setting ErrFile to fd 2...
	I1211 15:27:55.293548    8008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:27:55.293707    8008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:27:55.294922    8008 out.go:352] Setting JSON to false
	I1211 15:27:55.314650    8008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5245,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:27:55.314723    8008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:27:55.320027    8008 out.go:177] * [ha-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:27:55.327844    8008 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:27:55.327887    8008 notify.go:220] Checking for updates...
	I1211 15:27:55.335843    8008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:27:55.338907    8008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:27:55.342861    8008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:27:55.345922    8008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:27:55.348889    8008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:27:55.352153    8008 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:27:55.352206    8008 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:27:55.355866    8008 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:27:55.362884    8008 start.go:297] selected driver: qemu2
	I1211 15:27:55.362890    8008 start.go:901] validating driver "qemu2" against &{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:27:55.362957    8008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:27:55.365502    8008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:27:55.365525    8008 cni.go:84] Creating CNI manager for ""
	I1211 15:27:55.365552    8008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 15:27:55.365604    8008 start.go:340] cluster config:
	{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:27:55.370177    8008 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:27:55.378866    8008 out.go:177] * Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	I1211 15:27:55.381869    8008 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:27:55.381888    8008 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:27:55.381902    8008 cache.go:56] Caching tarball of preloaded images
	I1211 15:27:55.381975    8008 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:27:55.381980    8008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:27:55.382044    8008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/ha-978000/config.json ...
	I1211 15:27:55.382500    8008 start.go:360] acquireMachinesLock for ha-978000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:27:55.382548    8008 start.go:364] duration metric: took 42.041µs to acquireMachinesLock for "ha-978000"
	I1211 15:27:55.382556    8008 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:27:55.382560    8008 fix.go:54] fixHost starting: 
	I1211 15:27:55.382671    8008 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W1211 15:27:55.382679    8008 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:27:55.389782    8008 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I1211 15:27:55.393830    8008 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:27:55.393873    8008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ea:48:4c:c6:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:27:55.396047    8008 main.go:141] libmachine: STDOUT: 
	I1211 15:27:55.396067    8008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:27:55.396106    8008 fix.go:56] duration metric: took 13.54275ms for fixHost
	I1211 15:27:55.396111    8008 start.go:83] releasing machines lock for "ha-978000", held for 13.559583ms
	W1211 15:27:55.396118    8008 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:27:55.396151    8008 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:27:55.396155    8008 start.go:729] Will try again in 5 seconds ...
	I1211 15:28:00.398309    8008 start.go:360] acquireMachinesLock for ha-978000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:28:00.398844    8008 start.go:364] duration metric: took 399.416µs to acquireMachinesLock for "ha-978000"
	I1211 15:28:00.398975    8008 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:28:00.398997    8008 fix.go:54] fixHost starting: 
	I1211 15:28:00.399814    8008 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W1211 15:28:00.399842    8008 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:28:00.404334    8008 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I1211 15:28:00.411278    8008 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:28:00.411494    8008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ea:48:4c:c6:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:28:00.422288    8008 main.go:141] libmachine: STDOUT: 
	I1211 15:28:00.422347    8008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:28:00.422418    8008 fix.go:56] duration metric: took 23.425458ms for fixHost
	I1211 15:28:00.422435    8008 start.go:83] releasing machines lock for "ha-978000", held for 23.570375ms
	W1211 15:28:00.422612    8008 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:28:00.430230    8008 out.go:201] 
	W1211 15:28:00.434297    8008 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:28:00.434325    8008 out.go:270] * 
	* 
	W1211 15:28:00.437143    8008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:28:00.442301    8008 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-978000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-978000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (36.907041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.04s)
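Every restart attempt above dies at the same point: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never receives its network fd and the start aborts. A minimal host-side triage sketch, assuming a standard lima-vm/socket_vmnet install under /opt/socket_vmnet (the manual launch line and gateway address below are conventional defaults, not taken from this log):

    # is the socket present, and is any daemon serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # if not, start the daemon by hand (vmnet.framework requires root);
    # adjust the gateway address to the local configuration.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet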

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 node delete m03 -v=7 --alsologtostderr: exit status 83 (45.669875ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-978000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:28:00.605637    8024 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:28:00.606111    8024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:00.606115    8024 out.go:358] Setting ErrFile to fd 2...
	I1211 15:28:00.606118    8024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:00.606292    8024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:28:00.606524    8024 mustload.go:65] Loading cluster: ha-978000
	I1211 15:28:00.606724    8024 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:28:00.610871    8024 out.go:177] * The control-plane node ha-978000 host is not running: state=Stopped
	I1211 15:28:00.613852    8024 out.go:177]   To start a cluster, run: "minikube start -p ha-978000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-978000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (34.618917ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:28:00.650670    8026 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:28:00.650841    8026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:00.650845    8026 out.go:358] Setting ErrFile to fd 2...
	I1211 15:28:00.650847    8026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:00.650989    8026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:28:00.651135    8026 out.go:352] Setting JSON to false
	I1211 15:28:00.651149    8026 mustload.go:65] Loading cluster: ha-978000
	I1211 15:28:00.651189    8026 notify.go:220] Checking for updates...
	I1211 15:28:00.651357    8026 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:28:00.651366    8026 status.go:174] checking status of ha-978000 ...
	I1211 15:28:00.651627    8026 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:28:00.651631    8026 status.go:384] host is not running, skipping remaining checks
	I1211 15:28:00.651633    8026 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (35.2725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)
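Three distinct exit codes recur in this run: 80 when a start attempt fails provisioning (GUEST_PROVISION), 83 when a command refuses to act because the control-plane host is stopped, and 7 from status when the host is merely reported as Stopped (which helpers_test.go treats as "may be ok"). A hypothetical triage wrapper whose mapping is inferred from this report alone, not from minikube's documented exit-code table:

    out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
    case $? in
      0)  echo "cluster running" ;;
      7)  echo "host stopped; status-only failure (may be ok)" ;;
      80) echo "guest provisioning failed; check socket_vmnet on the host" ;;
      83) echo "control-plane not running; try: minikube start -p ha-978000" ;;
      *)  echo "unexpected exit code" ;;
    esac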

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-978000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (35.20375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
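ha_test.go:415 asserts on the Status field inside the profile JSON dumped verbatim above. To inspect the same field by hand instead of reading the escaped blob, one option is jq (an illustration only; the test itself does not shell out to jq):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-978000") | .Status'
    # prints "Starting" in this run; the test wanted "Degraded"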

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-978000 stop -v=7 --alsologtostderr: (3.083715542s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (72.188708ms)

                                                
                                                
-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:28:03.930343    8053 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:28:03.930574    8053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:03.930578    8053 out.go:358] Setting ErrFile to fd 2...
	I1211 15:28:03.930581    8053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:03.930742    8053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:28:03.930889    8053 out.go:352] Setting JSON to false
	I1211 15:28:03.930901    8053 mustload.go:65] Loading cluster: ha-978000
	I1211 15:28:03.930950    8053 notify.go:220] Checking for updates...
	I1211 15:28:03.931133    8053 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:28:03.931142    8053 status.go:174] checking status of ha-978000 ...
	I1211 15:28:03.931457    8053 status.go:371] ha-978000 host status = "Stopped" (err=<nil>)
	I1211 15:28:03.931461    8053 status.go:384] host is not running, skipping remaining checks
	I1211 15:28:03.931464    8053 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (36.72225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.19s)
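The three assertions above (ha_test.go:545, :551, :554) count lines in the status output: after stopping what should be a multi-node HA cluster they expect two control-plane entries and three stopped kubelets, but only the single ha-978000 entry exists because the extra nodes were never created. A shell sketch of the same counting (plain grep over the status text, which is an assumption; the Go test does not implement it this way):

    OUT=$(out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr) || true
    printf '%s\n' "$OUT" | grep -c 'type: Control Plane'   # 1 here, expected 2
    printf '%s\n' "$OUT" | grep -c 'kubelet: Stopped'      # 1 here, expected 3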

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.187060833s)

                                                
                                                
-- stdout --
	* [ha-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:28:04.001748    8057 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:28:04.001933    8057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:04.001936    8057 out.go:358] Setting ErrFile to fd 2...
	I1211 15:28:04.001939    8057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:04.002076    8057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:28:04.003113    8057 out.go:352] Setting JSON to false
	I1211 15:28:04.020563    8057 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5254,"bootTime":1733954430,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:28:04.020638    8057 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:28:04.024463    8057 out.go:177] * [ha-978000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:28:04.032284    8057 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:28:04.032309    8057 notify.go:220] Checking for updates...
	I1211 15:28:04.039323    8057 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:28:04.042287    8057 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:28:04.045369    8057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:28:04.046754    8057 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:28:04.050338    8057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:28:04.053692    8057 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:28:04.053965    8057 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:28:04.057302    8057 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:28:04.064304    8057 start.go:297] selected driver: qemu2
	I1211 15:28:04.064309    8057 start.go:901] validating driver "qemu2" against &{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:28:04.064349    8057 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:28:04.066746    8057 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:28:04.066768    8057 cni.go:84] Creating CNI manager for ""
	I1211 15:28:04.066790    8057 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 15:28:04.066840    8057 start.go:340] cluster config:
	{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-978000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:28:04.071128    8057 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:28:04.079327    8057 out.go:177] * Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	I1211 15:28:04.083349    8057 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:28:04.083369    8057 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:28:04.083382    8057 cache.go:56] Caching tarball of preloaded images
	I1211 15:28:04.083452    8057 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:28:04.083458    8057 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:28:04.083515    8057 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/ha-978000/config.json ...
	I1211 15:28:04.083982    8057 start.go:360] acquireMachinesLock for ha-978000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:28:04.084014    8057 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "ha-978000"
	I1211 15:28:04.084023    8057 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:28:04.084028    8057 fix.go:54] fixHost starting: 
	I1211 15:28:04.084167    8057 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W1211 15:28:04.084175    8057 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:28:04.090291    8057 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I1211 15:28:04.094346    8057 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:28:04.094386    8057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ea:48:4c:c6:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:28:04.096691    8057 main.go:141] libmachine: STDOUT: 
	I1211 15:28:04.096712    8057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:28:04.096742    8057 fix.go:56] duration metric: took 12.711959ms for fixHost
	I1211 15:28:04.096746    8057 start.go:83] releasing machines lock for "ha-978000", held for 12.727958ms
	W1211 15:28:04.096751    8057 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:28:04.096796    8057 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:28:04.096801    8057 start.go:729] Will try again in 5 seconds ...
	I1211 15:28:09.099004    8057 start.go:360] acquireMachinesLock for ha-978000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:28:09.099465    8057 start.go:364] duration metric: took 368.791µs to acquireMachinesLock for "ha-978000"
	I1211 15:28:09.099588    8057 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:28:09.099606    8057 fix.go:54] fixHost starting: 
	I1211 15:28:09.100315    8057 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W1211 15:28:09.100339    8057 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:28:09.104826    8057 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I1211 15:28:09.111760    8057 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:28:09.111994    8057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ea:48:4c:c6:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/ha-978000/disk.qcow2
	I1211 15:28:09.121606    8057 main.go:141] libmachine: STDOUT: 
	I1211 15:28:09.121677    8057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:28:09.121736    8057 fix.go:56] duration metric: took 22.133167ms for fixHost
	I1211 15:28:09.121748    8057 start.go:83] releasing machines lock for "ha-978000", held for 22.261916ms
	W1211 15:28:09.121914    8057 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:28:09.128716    8057 out.go:201] 
	W1211 15:28:09.132943    8057 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:28:09.132969    8057 out.go:270] * 
	* 
	W1211 15:28:09.135361    8057 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:28:09.143784    8057 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (77.185167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
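The start.go:714 and start.go:729 lines show the recovery strategy: on a failed host start, minikube releases the machines lock, waits a fixed 5 seconds, and retries exactly once before exiting with GUEST_PROVISION. Since socket_vmnet stays down for the whole window, both attempts fail identically. One way to gate the retry on the daemon actually being back, sketched as an external wrapper (not minikube behaviour):

    # wait up to 30s for the daemon socket before (re)starting the cluster
    for i in $(seq 1 30); do
      [ -S /var/run/socket_vmnet ] && break
      sleep 1
    done
    out/minikube-darwin-arm64 start -p ha-978000 --wait=true --driver=qemu2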

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-978000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.536959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-978000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-978000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.965792ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-978000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-978000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:28:09.358326    8072 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:28:09.358549    8072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:09.358552    8072 out.go:358] Setting ErrFile to fd 2...
	I1211 15:28:09.358554    8072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:09.358673    8072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:28:09.358927    8072 mustload.go:65] Loading cluster: ha-978000
	I1211 15:28:09.359147    8072 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:28:09.362150    8072 out.go:177] * The control-plane node ha-978000 host is not running: state=Stopped
	I1211 15:28:09.366015    8072 out.go:177]   To start a cluster, run: "minikube start -p ha-978000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-978000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.812417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-978000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-978000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (34.708417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)
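Unlike the Status checks, ha_test.go:305 asserts on the length of Config.Nodes in the same profile JSON (4 nodes expected, 1 found). The equivalent manual query, again via jq purely as an illustration:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-978000") | (.Config.Nodes | length)'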

                                                
                                    
TestImageBuild/serial/Setup (9.99s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-976000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-976000 --driver=qemu2 : exit status 80 (9.909913625s)

                                                
                                                
-- stdout --
	* [image-976000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-976000" primary control-plane node in "image-976000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-976000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-976000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-976000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-976000 -n image-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-976000 -n image-976000: exit status 7 (75.1225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.99s)
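This failure takes the fresh-create path rather than the restart path: the error chain reads "creating host: create: creating: ...", and minikube deletes the half-built profile and retries once before giving up. To reproduce outside the harness and clean up afterwards (the delete command is the one the error text itself suggests):

    out/minikube-darwin-arm64 start -p image-976000 --driver=qemu2; echo "exit status: $?"
    out/minikube-darwin-arm64 delete -p image-976000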

                                                
                                    
TestJSONOutput/start/Command (9.78s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-177000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-177000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.780004583s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e26c1a2e-8fed-4290-b704-4afb1e9cdc9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-177000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4866745f-bc21-43d8-8f6e-cf51280038d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20083"}}
	{"specversion":"1.0","id":"bf99ebb5-b175-4acf-ab5e-84f06f3a9cad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig"}}
	{"specversion":"1.0","id":"8d78bc12-7ebb-444e-9b46-36bef37e0f98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c8bf8b7e-c59e-42e2-93f4-e51fddb3925c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8a94794d-8f69-4f4a-8398-83419229974b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube"}}
	{"specversion":"1.0","id":"f54fa05a-6b43-4a62-ad7f-5fb89a78dc2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3529dea4-26ca-4b70-882c-d8bf05094de6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f1dd6b3c-3ed0-4eda-8738-9dde6bb9a153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"29d329a0-6370-478d-b12e-9eb33929051a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-177000\" primary control-plane node in \"json-output-177000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17c55511-ae89-459f-b449-565bdb559609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d1280af2-cc13-465f-b253-c597738efbaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-177000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"205e4c18-cf33-4cfe-97c6-de953838fc00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5d520010-4192-4e70-bbe8-e105c11a06df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"09256f97-e395-48ed-ad76-3028b0a7d6c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-177000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"60465358-f2c3-4bd8-aef9-617da64ce885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"565041b1-6a98-4a87-82a1-419e5d2b1815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-177000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.78s)
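
The "invalid character 'O' looking for beginning of value" assertion is encoding/json rejecting the stray "OUTPUT:" / "ERROR:" lines that socket_vmnet_client mixes into minikube's otherwise line-delimited JSON stream (TestJSONOutput/unpause/Command below fails the same way on a leading '*'). A rough sketch of that per-line check, using only the standard library and not the actual json_output_test.go code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the captured stdout above.
	out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT:
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// For the "OUTPUT:" line this prints:
			//   converting to cloud events: invalid character 'O' looking for beginning of value
			fmt.Printf("converting to cloud events: %v\n", err)
			return
		}
		fmt.Println("ok:", ev["type"])
	}
}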

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-177000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-177000 --output=json --user=testUser: exit status 83 (83.727875ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"45866721-b256-4a57-9828-98fa7eb99f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-177000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9c5aeba8-46e2-46bd-97be-4a0c270c53de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-177000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-177000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-177000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-177000 --output=json --user=testUser: exit status 83 (49.519083ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-177000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-177000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-177000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.22s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-290000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-290000 --driver=qemu2 : exit status 80 (9.898193875s)

                                                
                                                
-- stdout --
	* [first-290000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-290000" primary control-plane node in "first-290000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-290000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-290000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-290000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-11 15:28:41.657972 -0800 PST m=+431.441433460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-292000 -n second-292000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-292000 -n second-292000: exit status 85 (84.663625ms)

                                                
                                                
-- stdout --
	* Profile "second-292000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-292000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-292000" host is not running, skipping log retrieval (state="* Profile \"second-292000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-292000\"")
helpers_test.go:175: Cleaning up "second-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-292000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-11 15:28:41.859767 -0800 PST m=+431.643230626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-290000 -n first-290000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-290000 -n first-290000: exit status 7 (35.306416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-290000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-290000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-290000
--- FAIL: TestMinikubeProfile (10.22s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-728000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-728000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.19741175s)

                                                
                                                
-- stdout --
	* [mount-start-1-728000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-728000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-728000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-728000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-728000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-728000 -n mount-start-1-728000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-728000 -n mount-start-1-728000: exit status 7 (76.137583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-728000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-921000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-921000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.908985708s)

                                                
                                                
-- stdout --
	* [multinode-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-921000" primary control-plane node in "multinode-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:28:52.482955    8214 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:28:52.483118    8214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:52.483121    8214 out.go:358] Setting ErrFile to fd 2...
	I1211 15:28:52.483123    8214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:28:52.483256    8214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:28:52.484375    8214 out.go:352] Setting JSON to false
	I1211 15:28:52.501942    8214 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5302,"bootTime":1733954430,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:28:52.502010    8214 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:28:52.509981    8214 out.go:177] * [multinode-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:28:52.518973    8214 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:28:52.519049    8214 notify.go:220] Checking for updates...
	I1211 15:28:52.527862    8214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:28:52.530946    8214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:28:52.533983    8214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:28:52.536970    8214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:28:52.539944    8214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:28:52.543174    8214 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:28:52.546889    8214 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:28:52.553955    8214 start.go:297] selected driver: qemu2
	I1211 15:28:52.553960    8214 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:28:52.553966    8214 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:28:52.556463    8214 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:28:52.559940    8214 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:28:52.563041    8214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:28:52.563063    8214 cni.go:84] Creating CNI manager for ""
	I1211 15:28:52.563099    8214 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1211 15:28:52.563106    8214 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 15:28:52.563142    8214 start.go:340] cluster config:
	{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:28:52.568117    8214 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:28:52.576968    8214 out.go:177] * Starting "multinode-921000" primary control-plane node in "multinode-921000" cluster
	I1211 15:28:52.580974    8214 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:28:52.581011    8214 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:28:52.581022    8214 cache.go:56] Caching tarball of preloaded images
	I1211 15:28:52.581114    8214 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:28:52.581120    8214 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:28:52.581318    8214 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/multinode-921000/config.json ...
	I1211 15:28:52.581338    8214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/multinode-921000/config.json: {Name:mkff50258b3d19012f637d815a814ae6fd67ace0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:28:52.581633    8214 start.go:360] acquireMachinesLock for multinode-921000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:28:52.581685    8214 start.go:364] duration metric: took 45.541µs to acquireMachinesLock for "multinode-921000"
	I1211 15:28:52.581699    8214 start.go:93] Provisioning new machine with config: &{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:28:52.581730    8214 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:28:52.587261    8214 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:28:52.605106    8214 start.go:159] libmachine.API.Create for "multinode-921000" (driver="qemu2")
	I1211 15:28:52.605134    8214 client.go:168] LocalClient.Create starting
	I1211 15:28:52.605215    8214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:28:52.605255    8214 main.go:141] libmachine: Decoding PEM data...
	I1211 15:28:52.605265    8214 main.go:141] libmachine: Parsing certificate...
	I1211 15:28:52.605303    8214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:28:52.605334    8214 main.go:141] libmachine: Decoding PEM data...
	I1211 15:28:52.605343    8214 main.go:141] libmachine: Parsing certificate...
	I1211 15:28:52.605817    8214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:28:52.766580    8214 main.go:141] libmachine: Creating SSH key...
	I1211 15:28:52.912297    8214 main.go:141] libmachine: Creating Disk image...
	I1211 15:28:52.912303    8214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:28:52.912534    8214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:28:52.922957    8214 main.go:141] libmachine: STDOUT: 
	I1211 15:28:52.922981    8214 main.go:141] libmachine: STDERR: 
	I1211 15:28:52.923036    8214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2 +20000M
	I1211 15:28:52.931731    8214 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:28:52.931750    8214 main.go:141] libmachine: STDERR: 
	I1211 15:28:52.931762    8214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:28:52.931766    8214 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:28:52.931778    8214 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:28:52.931820    8214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:27:1f:4d:3e:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:28:52.933631    8214 main.go:141] libmachine: STDOUT: 
	I1211 15:28:52.933646    8214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:28:52.933667    8214 client.go:171] duration metric: took 328.529084ms to LocalClient.Create
	I1211 15:28:54.935830    8214 start.go:128] duration metric: took 2.354108042s to createHost
	I1211 15:28:54.935942    8214 start.go:83] releasing machines lock for "multinode-921000", held for 2.354239708s
	W1211 15:28:54.936017    8214 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:28:54.948373    8214 out.go:177] * Deleting "multinode-921000" in qemu2 ...
	W1211 15:28:54.979290    8214 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:28:54.979310    8214 start.go:729] Will try again in 5 seconds ...
	I1211 15:28:59.981487    8214 start.go:360] acquireMachinesLock for multinode-921000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:28:59.982089    8214 start.go:364] duration metric: took 508µs to acquireMachinesLock for "multinode-921000"
	I1211 15:28:59.982211    8214 start.go:93] Provisioning new machine with config: &{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:28:59.982426    8214 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:29:00.000034    8214 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:29:00.038025    8214 start.go:159] libmachine.API.Create for "multinode-921000" (driver="qemu2")
	I1211 15:29:00.038087    8214 client.go:168] LocalClient.Create starting
	I1211 15:29:00.038260    8214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:29:00.038384    8214 main.go:141] libmachine: Decoding PEM data...
	I1211 15:29:00.038405    8214 main.go:141] libmachine: Parsing certificate...
	I1211 15:29:00.038493    8214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:29:00.038596    8214 main.go:141] libmachine: Decoding PEM data...
	I1211 15:29:00.038614    8214 main.go:141] libmachine: Parsing certificate...
	I1211 15:29:00.039540    8214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:29:00.207973    8214 main.go:141] libmachine: Creating SSH key...
	I1211 15:29:00.286757    8214 main.go:141] libmachine: Creating Disk image...
	I1211 15:29:00.286763    8214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:29:00.287008    8214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:29:00.296922    8214 main.go:141] libmachine: STDOUT: 
	I1211 15:29:00.296943    8214 main.go:141] libmachine: STDERR: 
	I1211 15:29:00.297016    8214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2 +20000M
	I1211 15:29:00.305388    8214 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:29:00.305411    8214 main.go:141] libmachine: STDERR: 
	I1211 15:29:00.305422    8214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:29:00.305427    8214 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:29:00.305436    8214 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:29:00.305464    8214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:3c:60:53:ca:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:29:00.307318    8214 main.go:141] libmachine: STDOUT: 
	I1211 15:29:00.307331    8214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:29:00.307345    8214 client.go:171] duration metric: took 269.252209ms to LocalClient.Create
	I1211 15:29:02.309492    8214 start.go:128] duration metric: took 2.327064541s to createHost
	I1211 15:29:02.309553    8214 start.go:83] releasing machines lock for "multinode-921000", held for 2.327468125s
	W1211 15:29:02.309918    8214 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:29:02.323598    8214 out.go:201] 
	W1211 15:29:02.328740    8214 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:29:02.328780    8214 out.go:270] * 
	* 
	W1211 15:29:02.331257    8214 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:29:02.345514    8214 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-921000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (75.4295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.99s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (88.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (64.147959ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-921000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- rollout status deployment/busybox: exit status 1 (62.194166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (63.0445ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:02.626209    7135 retry.go:31] will retry after 1.127025027s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.384958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:03.863970    7135 retry.go:31] will retry after 2.023410305s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.553833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:05.998277    7135 retry.go:31] will retry after 2.539203916s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.188667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:08.647953    7135 retry.go:31] will retry after 4.693112481s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.17675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:13.452712    7135 retry.go:31] will retry after 5.391098325s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.076584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:18.957308    7135 retry.go:31] will retry after 6.79918588s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.374708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:25.868183    7135 retry.go:31] will retry after 15.960526762s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.666917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:29:41.937904    7135 retry.go:31] will retry after 24.958056071s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.654542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1211 15:30:07.005794    7135 retry.go:31] will retry after 23.369279526s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.167125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.109917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.546958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.719291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-921000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.536291ms)

** stderr ** 
	error: no server found for cluster "multinode-921000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.655042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (88.34s)
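
Note on the failure above: every kubectl retry fails with the same `error: no server found for cluster "multinode-921000"` because the qemu2 VM never started, so the kubeconfig entry for the profile carries no API-server endpoint; backing off and re-running the query cannot succeed. The poll-with-backoff pattern the harness applies (retry.go:31) looks roughly like the standalone sketch below: hypothetical code, not the test's own, assuming a `minikube` binary on PATH.

    // pollPodIPs: re-run the jsonpath query used by the test until it
    // succeeds or a deadline passes, doubling the wait between attempts.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(90 * time.Second)
        delay := 5 * time.Second
        for time.Now().Before(deadline) {
            out, err := exec.Command("minikube", "kubectl", "-p", "multinode-921000", "--",
                "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
            if err == nil {
                fmt.Printf("pod IPs: %s\n", out)
                return
            }
            fmt.Printf("will retry after %v: %v\n%s", delay, err, out)
            time.Sleep(delay)
            delay *= 2 // the real harness adds jitter; plain doubling here
        }
        fmt.Println("gave up: the cluster never exposed an API server")
    }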

TestMultiNode/serial/PingHostFrom2Pods (0.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-921000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (64.504667ms)

** stderr ** 
	error: no server found for cluster "multinode-921000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.966625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)
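
The `no server found for cluster` message is kubectl's wording for a kubeconfig whose cluster entry exists but has no server URL. One quick way to confirm that from a checker, as a hedged diagnostic sketch (assumes kubectl on PATH; the jsonpath filter expression is standard kubectl syntax):

    // printServer: show the API-server URL recorded in kubeconfig for the
    // cluster entry; empty output means the entry was never populated.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "view", "-o",
            `jsonpath={.clusters[?(@.name=="multinode-921000")].cluster.server}`).Output()
        if err != nil {
            fmt.Println("kubectl config view failed:", err)
            return
        }
        fmt.Printf("server for multinode-921000: %q\n", out)
    }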

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-921000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-921000 -v 3 --alsologtostderr: exit status 83 (49.897542ms)

-- stdout --
	* The control-plane node multinode-921000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-921000"

-- /stdout --
** stderr ** 
	I1211 15:30:30.904495    8601 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:30.904702    8601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:30.904706    8601 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:30.904708    8601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:30.904841    8601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:30.905090    8601 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:30.905306    8601 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:30.911076    8601 out.go:177] * The control-plane node multinode-921000 host is not running: state=Stopped
	I1211 15:30:30.916058    8601 out.go:177]   To start a cluster, run: "minikube start -p multinode-921000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-921000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.377708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
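
`node add` exits 83 here, a different code from the status 7 returned by the post-mortem status query, so a caller can tell the "host not running" advice apart from other failures by the numeric exit code alone. A minimal sketch of reading that code via os/exec (illustrative helper, not part of the suite; assumes `minikube` on PATH):

    // Run the same command as the test and report the numeric exit code,
    // so the 83 seen above can be distinguished from other failures.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "node", "add", "-p", "multinode-921000").CombinedOutput()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("node added")
        case errors.As(err, &ee):
            fmt.Printf("minikube exited %d:\n%s", ee.ExitCode(), out)
        default:
            fmt.Println("could not run minikube:", err) // e.g. binary not on PATH
        }
    }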

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-921000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-921000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.116333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-921000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-921000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-921000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.460958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
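
Two errors stack in this test: kubectl exits 1 for the missing context, and the harness then hands kubectl's empty stdout to a JSON decoder, which adds the secondary "unexpected end of JSON input". Guarding the decode keeps the root cause visible; a minimal sketch (stdlib only, command line as in the log, assuming kubectl on PATH):

    // Decode node labels from kubectl's jsonpath output, but surface the
    // command failure instead of a JSON error when stdout is empty.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "multinode-921000",
            "get", "nodes", "-o",
            "jsonpath=[{range .items[*]}{.metadata.labels},{end}]").Output()
        if err != nil || len(out) == 0 {
            fmt.Println("kubectl failed, skipping decode:", err)
            return
        }
        // the jsonpath template leaves a trailing comma before ']'; trim it
        cleaned := strings.Replace(string(out), ",]", "]", 1)
        var labels []map[string]string
        if err := json.Unmarshal([]byte(cleaned), &labels); err != nil {
            fmt.Println("bad label JSON:", err)
            return
        }
        fmt.Println("labels:", labels)
    }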

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-921000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-921000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-921000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-921000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.887542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
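
The quoted profile JSON is structurally fine; the assertion fails on semantics, because `Config.Nodes` contains a single entry where three nodes were expected. Decoding only the fields the check needs is enough, as in this sketch (struct shape inferred from the JSON above, not minikube's internal types; assumes `minikube` on PATH):

    // Decode just the subset of `minikube profile list --output json`
    // that the node-count assertion relies on.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
        }
    }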

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status --output json --alsologtostderr: exit status 7 (34.344917ms)

-- stdout --
	{"Name":"multinode-921000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1211 15:30:31.138992    8613 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:31.139174    8613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.139178    8613 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:31.139180    8613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.139318    8613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:31.139444    8613 out.go:352] Setting JSON to true
	I1211 15:30:31.139454    8613 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:31.139518    8613 notify.go:220] Checking for updates...
	I1211 15:30:31.139668    8613 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:31.139676    8613 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:31.139925    8613 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:31.139928    8613 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:31.139930    8613 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-921000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.247208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
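
Here `status --output json` printed a single object because the profile only has one node, while the test unmarshals into `[]cluster.Status`, hence "cannot unmarshal object into Go value of type []cluster.Status". A decoder that tolerates both shapes avoids the secondary error; a sketch with a `Status` struct limited to the fields visible in the stdout above:

    // Accept minikube's status JSON whether it is a single object (one
    // node) or an array (multi-node).
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func decodeStatus(out []byte) ([]Status, error) {
        out = bytes.TrimSpace(out)
        if len(out) > 0 && out[0] == '[' {
            var many []Status
            return many, json.Unmarshal(out, &many)
        }
        var one Status
        if err := json.Unmarshal(out, &one); err != nil {
            return nil, err
        }
        return []Status{one}, nil
    }

    func main() {
        single := []byte(`{"Name":"multinode-921000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        st, err := decodeStatus(single)
        fmt.Println(st, err)
    }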

TestMultiNode/serial/StopNode (0.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 node stop m03: exit status 85 (51.385ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-921000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status: exit status 7 (34.577792ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr: exit status 7 (34.876208ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:31.294973    8621 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:31.295151    8621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.295154    8621 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:31.295157    8621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.295310    8621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:31.295428    8621 out.go:352] Setting JSON to false
	I1211 15:30:31.295438    8621 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:31.295489    8621 notify.go:220] Checking for updates...
	I1211 15:30:31.295642    8621 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:31.295651    8621 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:31.295899    8621 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:31.295902    8621 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:31.295904    8621 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr": multinode-921000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.709584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)
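
`node stop m03` exits 85 with the reason tag GUEST_NODE_RETRIEVE because the single-node profile never had an m03. The `X Exiting due to <REASON>:` tag in stderr is a steadier thing to match than the free-form advice text; a small extraction sketch (the regexp assumes the stderr format shown in this log):

    // Extract minikube's "Exiting due to <REASON>:" tag from stderr so a
    // caller can branch on GUEST_NODE_RETRIEVE and friends.
    package main

    import (
        "fmt"
        "regexp"
    )

    var reasonRE = regexp.MustCompile(`Exiting due to ([A-Z_]+):`)

    func exitReason(stderr string) string {
        if m := reasonRE.FindStringSubmatch(stderr); m != nil {
            return m[1]
        }
        return ""
    }

    func main() {
        stderr := `X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03`
        fmt.Println(exitReason(stderr)) // GUEST_NODE_RETRIEVE
    }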

TestMultiNode/serial/StartAfterStop (47.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 node start m03 -v=7 --alsologtostderr: exit status 85 (52.315875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1211 15:30:31.364803    8625 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:31.365231    8625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.365235    8625 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:31.365238    8625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.365417    8625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:31.365660    8625 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:31.365849    8625 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:31.369329    8625 out.go:201] 
	W1211 15:30:31.373081    8625 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1211 15:30:31.373086    8625 out.go:270] * 
	* 
	W1211 15:30:31.374779    8625 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:30:31.379244    8625 out.go:201] 

** /stderr **
multinode_test.go:284: I1211 15:30:31.364803    8625 out.go:345] Setting OutFile to fd 1 ...
I1211 15:30:31.365231    8625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:30:31.365235    8625 out.go:358] Setting ErrFile to fd 2...
I1211 15:30:31.365238    8625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 15:30:31.365417    8625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
I1211 15:30:31.365660    8625 mustload.go:65] Loading cluster: multinode-921000
I1211 15:30:31.365849    8625 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1211 15:30:31.369329    8625 out.go:201] 
W1211 15:30:31.373081    8625 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1211 15:30:31.373086    8625 out.go:270] * 
* 
W1211 15:30:31.374779    8625 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1211 15:30:31.379244    8625 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-921000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (36.574417ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:31.419062    8627 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:31.419247    8627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.419250    8627 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:31.419252    8627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:31.419366    8627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:31.419497    8627 out.go:352] Setting JSON to false
	I1211 15:30:31.419507    8627 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:31.419556    8627 notify.go:220] Checking for updates...
	I1211 15:30:31.419713    8627 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:31.419722    8627 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:31.419967    8627 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:31.419970    8627 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:31.419972    8627 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:31.420850    7135 retry.go:31] will retry after 862.99753ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (79.826958ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:32.363802    8629 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:32.364034    8629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:32.364039    8629 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:32.364042    8629 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:32.364236    8629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:32.364396    8629 out.go:352] Setting JSON to false
	I1211 15:30:32.364409    8629 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:32.364444    8629 notify.go:220] Checking for updates...
	I1211 15:30:32.364671    8629 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:32.364682    8629 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:32.364990    8629 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:32.364994    8629 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:32.364997    8629 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:32.366031    7135 retry.go:31] will retry after 1.291965552s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (77.840958ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:33.735909    8631 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:33.736131    8631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:33.736134    8631 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:33.736138    8631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:33.736327    8631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:33.736489    8631 out.go:352] Setting JSON to false
	I1211 15:30:33.736503    8631 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:33.736545    8631 notify.go:220] Checking for updates...
	I1211 15:30:33.736786    8631 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:33.736796    8631 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:33.737113    8631 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:33.737118    8631 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:33.737121    8631 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:33.738209    7135 retry.go:31] will retry after 1.547547366s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (78.437125ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:35.364248    8633 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:35.364461    8633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:35.364465    8633 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:35.364468    8633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:35.364666    8633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:35.364834    8633 out.go:352] Setting JSON to false
	I1211 15:30:35.364846    8633 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:35.364880    8633 notify.go:220] Checking for updates...
	I1211 15:30:35.365131    8633 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:35.365141    8633 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:35.365463    8633 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:35.365467    8633 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:35.365470    8633 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:35.366517    7135 retry.go:31] will retry after 3.753084263s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (79.950458ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:39.199703    8636 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:39.199939    8636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:39.199943    8636 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:39.199946    8636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:39.200095    8636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:39.200257    8636 out.go:352] Setting JSON to false
	I1211 15:30:39.200270    8636 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:39.200302    8636 notify.go:220] Checking for updates...
	I1211 15:30:39.200536    8636 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:39.200545    8636 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:39.200856    8636 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:39.200860    8636 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:39.200863    8636 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:39.201909    7135 retry.go:31] will retry after 5.829474156s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (77.544167ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:45.108874    8642 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:45.109090    8642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:45.109095    8642 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:45.109098    8642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:45.109293    8642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:45.109466    8642 out.go:352] Setting JSON to false
	I1211 15:30:45.109479    8642 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:45.109552    8642 notify.go:220] Checking for updates...
	I1211 15:30:45.109827    8642 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:45.109837    8642 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:45.110165    8642 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:45.110170    8642 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:45.110173    8642 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:45.111232    7135 retry.go:31] will retry after 4.277647661s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (78.640834ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:30:49.467547    8644 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:30:49.467765    8644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:49.467769    8644 out.go:358] Setting ErrFile to fd 2...
	I1211 15:30:49.467773    8644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:30:49.467950    8644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:30:49.468130    8644 out.go:352] Setting JSON to false
	I1211 15:30:49.468143    8644 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:30:49.468180    8644 notify.go:220] Checking for updates...
	I1211 15:30:49.468419    8644 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:30:49.468429    8644 status.go:174] checking status of multinode-921000 ...
	I1211 15:30:49.468779    8644 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:30:49.468784    8644 status.go:384] host is not running, skipping remaining checks
	I1211 15:30:49.468786    8644 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:30:49.469855    7135 retry.go:31] will retry after 16.243163228s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (79.446208ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:31:05.791316    8658 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:05.791571    8658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:05.791576    8658 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:05.791579    8658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:05.791768    8658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:05.791971    8658 out.go:352] Setting JSON to false
	I1211 15:31:05.791985    8658 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:31:05.792028    8658 notify.go:220] Checking for updates...
	I1211 15:31:05.792972    8658 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:05.793041    8658 status.go:174] checking status of multinode-921000 ...
	I1211 15:31:05.793563    8658 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:31:05.793570    8658 status.go:384] host is not running, skipping remaining checks
	I1211 15:31:05.793572    8658 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1211 15:31:05.794840    7135 retry.go:31] will retry after 13.295631893s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr: exit status 7 (78.822291ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:31:19.168839    8662 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:19.169096    8662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:19.169100    8662 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:19.169103    8662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:19.169286    8662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:19.169444    8662 out.go:352] Setting JSON to false
	I1211 15:31:19.169457    8662 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:31:19.169495    8662 notify.go:220] Checking for updates...
	I1211 15:31:19.169755    8662 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:19.169767    8662 status.go:174] checking status of multinode-921000 ...
	I1211 15:31:19.170078    8662 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:31:19.170083    8662 status.go:384] host is not running, skipping remaining checks
	I1211 15:31:19.170086    8662 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-921000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (35.918875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.88s)
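
The waits logged by retry.go above (0.86s, 1.29s, 1.55s, 3.75s, 5.83s, 4.28s, 16.24s, 13.30s) are not monotonic, which is consistent with an exponentially growing base wait randomized per attempt. One plausible reconstruction of such a policy, offered as a guess at the shape rather than the harness's actual code:

    // A jittered exponential backoff: the base doubles each attempt and
    // each wait is drawn from [0.5*base, 1.5*base), which yields
    // non-monotonic intervals like the ones logged above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        base := 500 * time.Millisecond
        for i := 1; i <= 8; i++ {
            wait := time.Duration(float64(base) * (0.5 + rand.Float64()))
            fmt.Printf("attempt %d: wait %v\n", i, wait)
            base *= 2
        }
    }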

TestMultiNode/serial/RestartKeepsNodes (9.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-921000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-921000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-921000: (3.779130042s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-921000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-921000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.235172583s)

-- stdout --
	* [multinode-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-921000" primary control-plane node in "multinode-921000" cluster
	* Restarting existing qemu2 VM for "multinode-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:31:23.093080    8691 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:23.093278    8691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:23.093283    8691 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:23.093286    8691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:23.093454    8691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:23.094743    8691 out.go:352] Setting JSON to false
	I1211 15:31:23.115990    8691 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5453,"bootTime":1733954430,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:31:23.116063    8691 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:31:23.120528    8691 out.go:177] * [multinode-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:31:23.129429    8691 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:31:23.129527    8691 notify.go:220] Checking for updates...
	I1211 15:31:23.137403    8691 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:31:23.140422    8691 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:31:23.143467    8691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:31:23.146441    8691 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:31:23.149422    8691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:31:23.152749    8691 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:23.152801    8691 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:31:23.156349    8691 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:31:23.163453    8691 start.go:297] selected driver: qemu2
	I1211 15:31:23.163459    8691 start.go:901] validating driver "qemu2" against &{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:31:23.163527    8691 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:31:23.166114    8691 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:31:23.166139    8691 cni.go:84] Creating CNI manager for ""
	I1211 15:31:23.166167    8691 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 15:31:23.166210    8691 start.go:340] cluster config:
	{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:31:23.171095    8691 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:23.179460    8691 out.go:177] * Starting "multinode-921000" primary control-plane node in "multinode-921000" cluster
	I1211 15:31:23.183477    8691 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:31:23.183493    8691 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:31:23.183499    8691 cache.go:56] Caching tarball of preloaded images
	I1211 15:31:23.183584    8691 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:31:23.183602    8691 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:31:23.183653    8691 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/multinode-921000/config.json ...
	I1211 15:31:23.184124    8691 start.go:360] acquireMachinesLock for multinode-921000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:31:23.184179    8691 start.go:364] duration metric: took 49.458µs to acquireMachinesLock for "multinode-921000"
	I1211 15:31:23.184188    8691 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:31:23.184192    8691 fix.go:54] fixHost starting: 
	I1211 15:31:23.184310    8691 fix.go:112] recreateIfNeeded on multinode-921000: state=Stopped err=<nil>
	W1211 15:31:23.184317    8691 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:31:23.187482    8691 out.go:177] * Restarting existing qemu2 VM for "multinode-921000" ...
	I1211 15:31:23.195391    8691 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:31:23.195436    8691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:3c:60:53:ca:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:31:23.197811    8691 main.go:141] libmachine: STDOUT: 
	I1211 15:31:23.197832    8691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:31:23.197864    8691 fix.go:56] duration metric: took 13.667875ms for fixHost
	I1211 15:31:23.197870    8691 start.go:83] releasing machines lock for "multinode-921000", held for 13.686417ms
	W1211 15:31:23.197877    8691 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:31:23.197910    8691 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:31:23.197915    8691 start.go:729] Will try again in 5 seconds ...
	I1211 15:31:28.200103    8691 start.go:360] acquireMachinesLock for multinode-921000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:31:28.200523    8691 start.go:364] duration metric: took 322.375µs to acquireMachinesLock for "multinode-921000"
	I1211 15:31:28.200682    8691 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:31:28.200704    8691 fix.go:54] fixHost starting: 
	I1211 15:31:28.201443    8691 fix.go:112] recreateIfNeeded on multinode-921000: state=Stopped err=<nil>
	W1211 15:31:28.201469    8691 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:31:28.206048    8691 out.go:177] * Restarting existing qemu2 VM for "multinode-921000" ...
	I1211 15:31:28.210022    8691 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:31:28.210249    8691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:3c:60:53:ca:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:31:28.220734    8691 main.go:141] libmachine: STDOUT: 
	I1211 15:31:28.220810    8691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:31:28.220910    8691 fix.go:56] duration metric: took 20.204958ms for fixHost
	I1211 15:31:28.220934    8691 start.go:83] releasing machines lock for "multinode-921000", held for 20.3875ms
	W1211 15:31:28.221109    8691 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:31:28.230027    8691 out.go:201] 
	W1211 15:31:28.233995    8691 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:31:28.234032    8691 out.go:270] * 
	* 
	W1211 15:31:28.236583    8691 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:31:28.244003    8691 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-921000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-921000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (36.957875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.16s)
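
Note: both restart attempts above fail before the VM ever boots: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed its network socket. A quick host-side check (a sketch; the socket and client paths are taken from the log above, and the service-restart line assumes a Homebrew-managed socket_vmnet install, which may not match this agent):

	# Is the socket_vmnet daemon running, and does its Unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Restart it if it is down (assumes it was installed as a Homebrew service)
	sudo brew services restart socket_vmnet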

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 node delete m03: exit status 83 (41.417708ms)

-- stdout --
	* The control-plane node multinode-921000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-921000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-921000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr: exit status 7 (34.411625ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:31:28.445171    8708 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:28.445368    8708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:28.445372    8708 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:28.445374    8708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:28.445516    8708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:28.445648    8708 out.go:352] Setting JSON to false
	I1211 15:31:28.445659    8708 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:31:28.445702    8708 notify.go:220] Checking for updates...
	I1211 15:31:28.445873    8708 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:28.445882    8708 status.go:174] checking status of multinode-921000 ...
	I1211 15:31:28.446134    8708 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:31:28.446138    8708 status.go:384] host is not running, skipping remaining checks
	I1211 15:31:28.446140    8708 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (35.013625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (4.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-921000 stop: (3.886045667s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status: exit status 7 (72.288416ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr: exit status 7 (36.713583ms)

-- stdout --
	multinode-921000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1211 15:31:32.475815    8735 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:32.476005    8735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:32.476008    8735 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:32.476011    8735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:32.476158    8735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:32.476287    8735 out.go:352] Setting JSON to false
	I1211 15:31:32.476296    8735 mustload.go:65] Loading cluster: multinode-921000
	I1211 15:31:32.476363    8735 notify.go:220] Checking for updates...
	I1211 15:31:32.476509    8735 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:32.476518    8735 status.go:174] checking status of multinode-921000 ...
	I1211 15:31:32.476775    8735 status.go:371] multinode-921000 host status = "Stopped" (err=<nil>)
	I1211 15:31:32.476779    8735 status.go:384] host is not running, skipping remaining checks
	I1211 15:31:32.476781    8735 status.go:176] multinode-921000 status: &{Name:multinode-921000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr": multinode-921000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-921000 status --alsologtostderr": multinode-921000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (34.461542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.03s)
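
Note: the stop itself succeeded (3.886045667s above); only the follow-up assertions failed. The test expects one "host: Stopped" and one "kubelet: Stopped" line per cluster node, and by this point the cluster should have had more than the single node shown, but no extra node was ever added because every VM start in this run failed. Roughly the count the assertion performs (a sketch, not the test's exact code):

	out/minikube-darwin-arm64 -p multinode-921000 status | grep -c "host: Stopped"
	# prints 1 here, one per existing node; the test expects more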

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-921000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-921000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.1871475s)

-- stdout --
	* [multinode-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-921000" primary control-plane node in "multinode-921000" cluster
	* Restarting existing qemu2 VM for "multinode-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:31:32.544881    8739 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:32.545063    8739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:32.545067    8739 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:32.545070    8739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:32.545197    8739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:32.546288    8739 out.go:352] Setting JSON to false
	I1211 15:31:32.564099    8739 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5462,"bootTime":1733954430,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:31:32.564172    8739 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:31:32.568894    8739 out.go:177] * [multinode-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:31:32.576636    8739 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:31:32.576699    8739 notify.go:220] Checking for updates...
	I1211 15:31:32.582786    8739 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:31:32.584147    8739 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:31:32.586795    8739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:31:32.589833    8739 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:31:32.592795    8739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:31:32.596037    8739 config.go:182] Loaded profile config "multinode-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:32.596330    8739 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:31:32.599787    8739 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:31:32.606778    8739 start.go:297] selected driver: qemu2
	I1211 15:31:32.606784    8739 start.go:901] validating driver "qemu2" against &{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:31:32.606845    8739 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:31:32.609260    8739 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:31:32.609284    8739 cni.go:84] Creating CNI manager for ""
	I1211 15:31:32.609305    8739 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 15:31:32.609350    8739 start.go:340] cluster config:
	{Name:multinode-921000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:31:32.613665    8739 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:32.621719    8739 out.go:177] * Starting "multinode-921000" primary control-plane node in "multinode-921000" cluster
	I1211 15:31:32.625688    8739 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:31:32.625701    8739 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:31:32.625704    8739 cache.go:56] Caching tarball of preloaded images
	I1211 15:31:32.625769    8739 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:31:32.625774    8739 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:31:32.625825    8739 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/multinode-921000/config.json ...
	I1211 15:31:32.626312    8739 start.go:360] acquireMachinesLock for multinode-921000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:31:32.626360    8739 start.go:364] duration metric: took 41.375µs to acquireMachinesLock for "multinode-921000"
	I1211 15:31:32.626368    8739 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:31:32.626372    8739 fix.go:54] fixHost starting: 
	I1211 15:31:32.626485    8739 fix.go:112] recreateIfNeeded on multinode-921000: state=Stopped err=<nil>
	W1211 15:31:32.626492    8739 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:31:32.633783    8739 out.go:177] * Restarting existing qemu2 VM for "multinode-921000" ...
	I1211 15:31:32.637786    8739 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:31:32.637835    8739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:3c:60:53:ca:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:31:32.640009    8739 main.go:141] libmachine: STDOUT: 
	I1211 15:31:32.640026    8739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:31:32.640056    8739 fix.go:56] duration metric: took 13.680708ms for fixHost
	I1211 15:31:32.640062    8739 start.go:83] releasing machines lock for "multinode-921000", held for 13.697917ms
	W1211 15:31:32.640068    8739 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:31:32.640118    8739 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:31:32.640122    8739 start.go:729] Will try again in 5 seconds ...
	I1211 15:31:37.642279    8739 start.go:360] acquireMachinesLock for multinode-921000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:31:37.642748    8739 start.go:364] duration metric: took 372.584µs to acquireMachinesLock for "multinode-921000"
	I1211 15:31:37.642857    8739 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:31:37.642875    8739 fix.go:54] fixHost starting: 
	I1211 15:31:37.643561    8739 fix.go:112] recreateIfNeeded on multinode-921000: state=Stopped err=<nil>
	W1211 15:31:37.643587    8739 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:31:37.648017    8739 out.go:177] * Restarting existing qemu2 VM for "multinode-921000" ...
	I1211 15:31:37.652997    8739 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:31:37.653218    8739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:3c:60:53:ca:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/multinode-921000/disk.qcow2
	I1211 15:31:37.662896    8739 main.go:141] libmachine: STDOUT: 
	I1211 15:31:37.662968    8739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:31:37.663049    8739 fix.go:56] duration metric: took 20.171917ms for fixHost
	I1211 15:31:37.663070    8739 start.go:83] releasing machines lock for "multinode-921000", held for 20.300084ms
	W1211 15:31:37.663251    8739 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:31:37.671980    8739 out.go:201] 
	W1211 15:31:37.675984    8739 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:31:37.676007    8739 out.go:270] * 
	* 
	W1211 15:31:37.678860    8739 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:31:37.686958    8739 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-921000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (73.797666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

TestMultiNode/serial/ValidateNameConflict (20.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-921000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-921000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-921000-m01 --driver=qemu2 : exit status 80 (9.818745375s)

-- stdout --
	* [multinode-921000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-921000-m01" primary control-plane node in "multinode-921000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-921000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-921000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-921000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-921000-m02 --driver=qemu2 : exit status 80 (9.981803708s)

-- stdout --
	* [multinode-921000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-921000-m02" primary control-plane node in "multinode-921000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-921000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-921000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-921000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-921000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-921000: exit status 83 (87.545541ms)

-- stdout --
	* The control-plane node multinode-921000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-921000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-921000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-921000 -n multinode-921000: exit status 7 (35.044ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.05s)
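
Note: judging by which steps log a failure, the m01 start is allowed to fail (its name collides with the node-name scheme of multinode-921000), but the m02 start at multinode_test.go:474 was expected to succeed and instead died at the same socket_vmnet connection step as every other start in this run, so the name-conflict logic itself was never really exercised. Profiles left behind by such aborted creations can be inspected with (a sketch using the same binary as the tests):

	out/minikube-darwin-arm64 profile list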

TestPreload (10.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-818000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-818000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.011932s)

-- stdout --
	* [test-preload-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-818000" primary control-plane node in "test-preload-818000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-818000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:31:57.976782    8795 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:31:57.976946    8795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:57.976949    8795 out.go:358] Setting ErrFile to fd 2...
	I1211 15:31:57.976951    8795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:31:57.977088    8795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:31:57.978210    8795 out.go:352] Setting JSON to false
	I1211 15:31:57.995912    8795 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5487,"bootTime":1733954430,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:31:57.995986    8795 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:31:58.002781    8795 out.go:177] * [test-preload-818000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:31:58.010834    8795 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:31:58.010891    8795 notify.go:220] Checking for updates...
	I1211 15:31:58.018806    8795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:31:58.020347    8795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:31:58.024744    8795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:31:58.027746    8795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:31:58.029239    8795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:31:58.033164    8795 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:31:58.033216    8795 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:31:58.037743    8795 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:31:58.043734    8795 start.go:297] selected driver: qemu2
	I1211 15:31:58.043739    8795 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:31:58.043744    8795 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:31:58.046339    8795 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:31:58.050798    8795 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:31:58.052456    8795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:31:58.052487    8795 cni.go:84] Creating CNI manager for ""
	I1211 15:31:58.052510    8795 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:31:58.052515    8795 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:31:58.052547    8795 start.go:340] cluster config:
	{Name:test-preload-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:31:58.057187    8795 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.065752    8795 out.go:177] * Starting "test-preload-818000" primary control-plane node in "test-preload-818000" cluster
	I1211 15:31:58.069726    8795 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1211 15:31:58.069800    8795 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/test-preload-818000/config.json ...
	I1211 15:31:58.069818    8795 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/test-preload-818000/config.json: {Name:mkfde5d9492f23b7f8b156e97f89b976efd9ceb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:31:58.069830    8795 cache.go:107] acquiring lock: {Name:mk910ea2f5b7d6fd1b7647debef4ca198489547c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.069852    8795 cache.go:107] acquiring lock: {Name:mk23c038d99bebb377ab82e56fdb9fc623d0aa1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.069830    8795 cache.go:107] acquiring lock: {Name:mkc097e774b50d6e493e31a093813a0d5ca9f4c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.069984    8795 cache.go:107] acquiring lock: {Name:mk6f526552b7e58edeb3322564abf2ffe21870e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.070027    8795 cache.go:107] acquiring lock: {Name:mkaf9af8d304403cd167e49f40d554b8d00e1a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.070066    8795 cache.go:107] acquiring lock: {Name:mk6bfa108c6044c6d56bbb25fd0b0b51927f7738 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.070105    8795 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1211 15:31:58.070117    8795 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1211 15:31:58.070111    8795 cache.go:107] acquiring lock: {Name:mk1c038843f94fc5385cc2c7e11728439486538d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.070163    8795 start.go:360] acquireMachinesLock for test-preload-818000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:31:58.070228    8795 cache.go:107] acquiring lock: {Name:mk7b5b7e484259752f7dd10bdf0a9ffd314e1d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:31:58.070464    8795 start.go:364] duration metric: took 290.959µs to acquireMachinesLock for "test-preload-818000"
	I1211 15:31:58.070477    8795 start.go:93] Provisioning new machine with config: &{Name:test-preload-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:31:58.070524    8795 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:31:58.070528    8795 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:31:58.070523    8795 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:31:58.070656    8795 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1211 15:31:58.070598    8795 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1211 15:31:58.074722    8795 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:31:58.075350    8795 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:31:58.075419    8795 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1211 15:31:58.082483    8795 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1211 15:31:58.082492    8795 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:31:58.082524    8795 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1211 15:31:58.082565    8795 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:31:58.084774    8795 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:31:58.084797    8795 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1211 15:31:58.084778    8795 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1211 15:31:58.084839    8795 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1211 15:31:58.092747    8795 start.go:159] libmachine.API.Create for "test-preload-818000" (driver="qemu2")
	I1211 15:31:58.092761    8795 client.go:168] LocalClient.Create starting
	I1211 15:31:58.092836    8795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:31:58.092873    8795 main.go:141] libmachine: Decoding PEM data...
	I1211 15:31:58.092886    8795 main.go:141] libmachine: Parsing certificate...
	I1211 15:31:58.092922    8795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:31:58.092954    8795 main.go:141] libmachine: Decoding PEM data...
	I1211 15:31:58.092964    8795 main.go:141] libmachine: Parsing certificate...
	I1211 15:31:58.093338    8795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:31:58.262662    8795 main.go:141] libmachine: Creating SSH key...
	I1211 15:31:58.362221    8795 main.go:141] libmachine: Creating Disk image...
	I1211 15:31:58.362250    8795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:31:58.362546    8795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2
	I1211 15:31:58.373243    8795 main.go:141] libmachine: STDOUT: 
	I1211 15:31:58.373282    8795 main.go:141] libmachine: STDERR: 
	I1211 15:31:58.373344    8795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2 +20000M
	I1211 15:31:58.382242    8795 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:31:58.382258    8795 main.go:141] libmachine: STDERR: 
	I1211 15:31:58.382276    8795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2
	I1211 15:31:58.382281    8795 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:31:58.382294    8795 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:31:58.382324    8795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:4a:f3:77:09:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2
	I1211 15:31:58.384410    8795 main.go:141] libmachine: STDOUT: 
	I1211 15:31:58.384425    8795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:31:58.384444    8795 client.go:171] duration metric: took 291.680042ms to LocalClient.Create
	I1211 15:31:58.567140    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1211 15:31:58.567447    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1211 15:31:58.609179    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1211 15:31:58.681822    8795 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1211 15:31:58.681867    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1211 15:31:58.823084    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1211 15:31:58.836472    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1211 15:31:58.874630    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1211 15:31:59.055921    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1211 15:31:59.055966    8795 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 985.993125ms
	I1211 15:31:59.055998    8795 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1211 15:31:59.229197    8795 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1211 15:31:59.229291    8795 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1211 15:31:59.681860    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1211 15:31:59.681921    8795 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.612113s
	I1211 15:31:59.681958    8795 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1211 15:32:00.384746    8795 start.go:128] duration metric: took 2.314174083s to createHost
	I1211 15:32:00.384802    8795 start.go:83] releasing machines lock for "test-preload-818000", held for 2.314358167s
	W1211 15:32:00.384847    8795 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:00.404396    8795 out.go:177] * Deleting "test-preload-818000" in qemu2 ...
	W1211 15:32:00.435136    8795 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:00.435163    8795 start.go:729] Will try again in 5 seconds ...
	I1211 15:32:00.901440    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1211 15:32:00.901483    8795 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.83166175s
	I1211 15:32:00.901518    8795 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1211 15:32:01.531188    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1211 15:32:01.531247    8795 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.461058292s
	I1211 15:32:01.531297    8795 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1211 15:32:03.251264    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1211 15:32:03.251307    8795 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.181517833s
	I1211 15:32:03.251335    8795 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1211 15:32:03.353919    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1211 15:32:03.353966    8795 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.284204417s
	I1211 15:32:03.353990    8795 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1211 15:32:05.437070    8795 start.go:360] acquireMachinesLock for test-preload-818000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:05.437489    8795 start.go:364] duration metric: took 363.416µs to acquireMachinesLock for "test-preload-818000"
	I1211 15:32:05.437577    8795 start.go:93] Provisioning new machine with config: &{Name:test-preload-818000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:32:05.437763    8795 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:32:05.447198    8795 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:32:05.495535    8795 start.go:159] libmachine.API.Create for "test-preload-818000" (driver="qemu2")
	I1211 15:32:05.495582    8795 client.go:168] LocalClient.Create starting
	I1211 15:32:05.495747    8795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:32:05.495839    8795 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:05.495861    8795 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:05.495944    8795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:32:05.496001    8795 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:05.496018    8795 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:05.496583    8795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:32:05.531522    8795 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1211 15:32:05.531550    8795 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.461661416s
	I1211 15:32:05.531562    8795 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1211 15:32:05.669687    8795 main.go:141] libmachine: Creating SSH key...
	I1211 15:32:05.880388    8795 main.go:141] libmachine: Creating Disk image...
	I1211 15:32:05.880396    8795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:32:05.880658    8795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2
	I1211 15:32:05.891205    8795 main.go:141] libmachine: STDOUT: 
	I1211 15:32:05.891219    8795 main.go:141] libmachine: STDERR: 
	I1211 15:32:05.891284    8795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2 +20000M
	I1211 15:32:05.899842    8795 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:32:05.899862    8795 main.go:141] libmachine: STDERR: 
	I1211 15:32:05.899873    8795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2
	I1211 15:32:05.899885    8795 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:32:05.899892    8795 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:05.899931    8795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:3f:3b:9e:e8:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/test-preload-818000/disk.qcow2
	I1211 15:32:05.901895    8795 main.go:141] libmachine: STDOUT: 
	I1211 15:32:05.901911    8795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:05.901924    8795 client.go:171] duration metric: took 406.340417ms to LocalClient.Create
	I1211 15:32:07.902442    8795 start.go:128] duration metric: took 2.464626375s to createHost
	I1211 15:32:07.902503    8795 start.go:83] releasing machines lock for "test-preload-818000", held for 2.465022s
	W1211 15:32:07.902733    8795 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-818000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:07.919446    8795 out.go:201] 
	W1211 15:32:07.923388    8795 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:32:07.923416    8795 out.go:270] * 
	* 
	W1211 15:32:07.926039    8795 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:32:07.940391    8795 out.go:201] 

** /stderr **
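Every qemu2 VM create in this run fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its vmnet file descriptor. A minimal host-side triage, sketched under the assumption that socket_vmnet was installed through Homebrew as in the minikube qemu2 driver docs (paths are taken from the log above):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon (Homebrew services install assumed):
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet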
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-818000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-11 15:32:07.958681 -0800 PST m=+637.744833335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-818000 -n test-preload-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-818000 -n test-preload-818000: exit status 7 (76.224417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-818000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-818000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-818000
--- FAIL: TestPreload (10.18s)

TestScheduledStopUnix (9.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-971000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-971000 --memory=2048 --driver=qemu2 : exit status 80 (9.78461925s)

-- stdout --
	* [scheduled-stop-971000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-971000" primary control-plane node in "scheduled-stop-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-971000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-971000" primary control-plane node in "scheduled-stop-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
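Both create attempts above die at the same connect call, before any scheduled-stop logic runs. socket_vmnet_client is, in effect, a thin wrapper: it connects to the daemon socket named by its first argument and then execs the remaining argv with that connection on fd 3, which is what the "-netdev socket,id=net0,fd=3" flag in the QEMU command lines above consumes. That makes the failure reproducible without minikube; a sketch, with /usr/bin/true standing in for qemu-system-aarch64:

	# On a healthy host this should exit 0, handing the child the vmnet
	# socket on fd 3; on this host it should fail up front with the same
	# "Connection refused" seen in the test logs:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true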
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-11 15:32:17.906095 -0800 PST m=+647.692377043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-971000 -n scheduled-stop-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-971000 -n scheduled-stop-971000: exit status 7 (73.065ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-971000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-971000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-971000
--- FAIL: TestScheduledStopUnix (9.94s)

TestSkaffold (12.54s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2864830637 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2864830637 version: (1.016508958s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-196000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-196000 --memory=2600 --driver=qemu2 : exit status 80 (9.955186083s)

-- stdout --
	* [skaffold-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-196000" primary control-plane node in "skaffold-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-196000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-196000" primary control-plane node in "skaffold-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-11 15:32:30.374499 -0800 PST m=+660.232771210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-196000 -n skaffold-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-196000 -n skaffold-196000: exit status 7 (66.081333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-196000
--- FAIL: TestSkaffold (12.54s)

TestRunningBinaryUpgrade (622.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2883729195 start -p running-upgrade-031000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2883729195 start -p running-upgrade-031000 --memory=2200 --vm-driver=qemu2 : (1m0.085781834s)
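This test provisions a cluster with the released v1.26.0 binary and then re-runs start on the same profile with the freshly built binary. The old profile predates socket_vmnet and uses QEMU user-mode networking (Network and SocketVMnetPath are empty in the config dumps below, node IP 10.0.2.15), which would explain why these starts get past VM creation while the socket_vmnet-backed tests above never do. The equivalent flow by hand, a sketch assuming the release-bucket URL layout used in the minikube docs (profile name illustrative):

	curl -Lo minikube-v1.26.0 https://storage.googleapis.com/minikube/releases/v1.26.0/minikube-darwin-arm64
	chmod +x minikube-v1.26.0
	./minikube-v1.26.0 start -p running-upgrade --memory=2200 --vm-driver=qemu2
	out/minikube-darwin-arm64 start -p running-upgrade --memory=2200 --alsologtostderr -v=1 --driver=qemu2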
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-031000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-031000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m47.7741245s)

-- stdout --
	* [running-upgrade-031000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-031000" primary control-plane node in "running-upgrade-031000" cluster
	* Updating the running qemu2 "running-upgrade-031000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I1211 15:33:53.876307    9127 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:33:53.876479    9127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:33:53.876483    9127 out.go:358] Setting ErrFile to fd 2...
	I1211 15:33:53.876485    9127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:33:53.876612    9127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:33:53.877626    9127 out.go:352] Setting JSON to false
	I1211 15:33:53.896002    9127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5603,"bootTime":1733954430,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:33:53.896080    9127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:33:53.899625    9127 out.go:177] * [running-upgrade-031000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:33:53.909543    9127 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:33:53.909626    9127 notify.go:220] Checking for updates...
	I1211 15:33:53.917484    9127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:33:53.921514    9127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:33:53.922651    9127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:33:53.925532    9127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:33:53.928507    9127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:33:53.931865    9127 config.go:182] Loaded profile config "running-upgrade-031000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:33:53.934488    9127 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1211 15:33:53.937540    9127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:33:53.940528    9127 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:33:53.947453    9127 start.go:297] selected driver: qemu2
	I1211 15:33:53.947457    9127 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:33:53.947501    9127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:33:53.949921    9127 cni.go:84] Creating CNI manager for ""
	I1211 15:33:53.949962    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:33:53.949995    9127 start.go:340] cluster config:
	{Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:33:53.950042    9127 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:33:53.958463    9127 out.go:177] * Starting "running-upgrade-031000" primary control-plane node in "running-upgrade-031000" cluster
	I1211 15:33:53.962559    9127 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:33:53.962572    9127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1211 15:33:53.962575    9127 cache.go:56] Caching tarball of preloaded images
	I1211 15:33:53.962635    9127 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:33:53.962641    9127 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1211 15:33:53.962685    9127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/config.json ...
	I1211 15:33:53.963049    9127 start.go:360] acquireMachinesLock for running-upgrade-031000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:34:05.299124    9127 start.go:364] duration metric: took 11.336406334s to acquireMachinesLock for "running-upgrade-031000"
	I1211 15:34:05.299147    9127 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:34:05.299154    9127 fix.go:54] fixHost starting: 
	I1211 15:34:05.299891    9127 fix.go:112] recreateIfNeeded on running-upgrade-031000: state=Running err=<nil>
	W1211 15:34:05.299901    9127 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:34:05.303377    9127 out.go:177] * Updating the running qemu2 "running-upgrade-031000" VM ...
	I1211 15:34:05.311294    9127 machine.go:93] provisionDockerMachine start ...
	I1211 15:34:05.311396    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.311535    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.311540    9127 main.go:141] libmachine: About to run SSH command:
	hostname
	I1211 15:34:05.385465    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-031000
	
	I1211 15:34:05.385483    9127 buildroot.go:166] provisioning hostname "running-upgrade-031000"
	I1211 15:34:05.385533    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.385655    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.385662    9127 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-031000 && echo "running-upgrade-031000" | sudo tee /etc/hostname
	I1211 15:34:05.463277    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-031000
	
	I1211 15:34:05.463365    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.463500    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.463509    9127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-031000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-031000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-031000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 15:34:05.539562    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 15:34:05.539575    9127 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20083-6627/.minikube CaCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20083-6627/.minikube}
	I1211 15:34:05.539583    9127 buildroot.go:174] setting up certificates
	I1211 15:34:05.539601    9127 provision.go:84] configureAuth start
	I1211 15:34:05.539615    9127 provision.go:143] copyHostCerts
	I1211 15:34:05.539681    9127 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem, removing ...
	I1211 15:34:05.539689    9127 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem
	I1211 15:34:05.539807    9127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem (1078 bytes)
	I1211 15:34:05.539998    9127 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem, removing ...
	I1211 15:34:05.540002    9127 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem
	I1211 15:34:05.540046    9127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem (1123 bytes)
	I1211 15:34:05.540160    9127 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem, removing ...
	I1211 15:34:05.540163    9127 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem
	I1211 15:34:05.540205    9127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem (1675 bytes)
	I1211 15:34:05.540301    9127 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-031000 san=[127.0.0.1 localhost minikube running-upgrade-031000]
	I1211 15:34:05.575873    9127 provision.go:177] copyRemoteCerts
	I1211 15:34:05.575942    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 15:34:05.575954    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:34:05.615112    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 15:34:05.622459    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1211 15:34:05.629640    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 15:34:05.636579    9127 provision.go:87] duration metric: took 96.965167ms to configureAuth
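
configureAuth refreshes the host-side certs and then issues a server certificate from the CA with the SANs logged above ([127.0.0.1 localhost minikube running-upgrade-031000]). A rough standard-library sketch of that issuance, not minikube's actual provision code: file names are placeholders for the .minikube/certs paths, the CA key is assumed to be RSA PKCS#1, and error handling is dropped for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")         // stands in for certs/ca.pem
	caKeyPEM, _ := os.ReadFile("ca-key.pem")  // stands in for certs/ca-key.pem
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-031000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		DNSNames:     []string{"localhost", "minikube", "running-upgrade-031000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
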
	I1211 15:34:05.636588    9127 buildroot.go:189] setting minikube options for container-runtime
	I1211 15:34:05.636696    9127 config.go:182] Loaded profile config "running-upgrade-031000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:34:05.636751    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.636842    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.636847    9127 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1211 15:34:05.710250    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1211 15:34:05.710262    9127 buildroot.go:70] root file system type: tmpfs
	I1211 15:34:05.710323    9127 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1211 15:34:05.710400    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.710523    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.710559    9127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1211 15:34:05.789720    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1211 15:34:05.789791    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.789902    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.789910    9127 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1211 15:34:05.882638    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
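
The diff || { mv; daemon-reload; enable; restart; } one-liner makes the unit install idempotent: docker is only replaced and restarted when docker.service.new actually differs from the installed unit, which is why this step completed in under 100ms here. A compare-then-swap sketch of the same idea in Go, as a local-file equivalent of what runs over SSH (unit path from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit replaces the unit file and bounces docker only when the rendered
// contents differ, matching the diff || { mv; ...; restart; } sequence above.
func updateUnit(path string, rendered []byte) error {
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, rendered) {
		return nil // unchanged: skip daemon-reload and the docker restart
	}
	if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
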
	I1211 15:34:05.882651    9127 machine.go:96] duration metric: took 571.36825ms to provisionDockerMachine
	I1211 15:34:05.882658    9127 start.go:293] postStartSetup for "running-upgrade-031000" (driver="qemu2")
	I1211 15:34:05.882667    9127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 15:34:05.882734    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 15:34:05.882744    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:34:05.924954    9127 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 15:34:05.926564    9127 info.go:137] Remote host: Buildroot 2021.02.12
	I1211 15:34:05.926571    9127 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/addons for local assets ...
	I1211 15:34:05.926653    9127 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/files for local assets ...
	I1211 15:34:05.926743    9127 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem -> 71352.pem in /etc/ssl/certs
	I1211 15:34:05.926847    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 15:34:05.929825    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:05.936800    9127 start.go:296] duration metric: took 54.138333ms for postStartSetup
	I1211 15:34:05.936813    9127 fix.go:56] duration metric: took 637.681875ms for fixHost
	I1211 15:34:05.936856    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.936957    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.936961    9127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 15:34:06.008288    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733960045.939916048
	
	I1211 15:34:06.008297    9127 fix.go:216] guest clock: 1733960045.939916048
	I1211 15:34:06.008301    9127 fix.go:229] Guest: 2024-12-11 15:34:05.939916048 -0800 PST Remote: 2024-12-11 15:34:05.936814 -0800 PST m=+12.085597709 (delta=3.102048ms)
	I1211 15:34:06.008314    9127 fix.go:200] guest clock delta is within tolerance: 3.102048ms
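
The fixHost step ends with a sanity check of the guest clock: `date +%s.%N` runs in the VM and the result is compared against the host clock, yielding the 3.102048ms delta logged above. A small Go sketch of that parse-and-compare using the exact values from this run (assumes the fractional part is the usual 9-digit nanosecond field):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (assumed 9-digit nanosecond
// fraction) and returns guest-minus-host.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsecs int64
	if len(parts) == 2 {
		if nsecs, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(secs, nsecs).Sub(host), nil
}

func main() {
	// Values from this run: guest 1733960045.939916048, host ...045.936814.
	host := time.Unix(1733960045, 936814000)
	d, err := guestClockDelta("1733960045.939916048", host)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < time.Second)
}
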
	I1211 15:34:06.008317    9127 start.go:83] releasing machines lock for "running-upgrade-031000", held for 709.200709ms
	I1211 15:34:06.008387    9127 ssh_runner.go:195] Run: cat /version.json
	I1211 15:34:06.008396    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:34:06.008387    9127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 15:34:06.008432    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	W1211 15:34:06.008912    9127 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:61660->127.0.0.1:61422: read: connection reset by peer
	I1211 15:34:06.008926    9127 retry.go:31] will retry after 223.672014ms: ssh: handshake failed: read tcp 127.0.0.1:61660->127.0.0.1:61422: read: connection reset by peer
	W1211 15:34:06.276633    9127 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1211 15:34:06.276706    9127 ssh_runner.go:195] Run: systemctl --version
	I1211 15:34:06.281581    9127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 15:34:06.283309    9127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 15:34:06.283354    9127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1211 15:34:06.287621    9127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1211 15:34:06.306921    9127 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 15:34:06.306940    9127 start.go:495] detecting cgroup driver to use...
	I1211 15:34:06.307004    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:06.317926    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1211 15:34:06.325573    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1211 15:34:06.344372    9127 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1211 15:34:06.344449    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1211 15:34:06.347463    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:06.350513    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1211 15:34:06.353664    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:06.358088    9127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 15:34:06.365139    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1211 15:34:06.368344    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1211 15:34:06.374010    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1211 15:34:06.377095    9127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 15:34:06.381766    9127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 15:34:06.386302    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:06.512009    9127 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1211 15:34:06.533413    9127 start.go:495] detecting cgroup driver to use...
	I1211 15:34:06.533497    9127 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1211 15:34:06.541438    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:06.547189    9127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 15:34:06.555669    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:06.572062    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1211 15:34:06.583794    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:06.594364    9127 ssh_runner.go:195] Run: which cri-dockerd
	I1211 15:34:06.595593    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1211 15:34:06.598149    9127 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1211 15:34:06.603022    9127 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1211 15:34:06.709547    9127 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1211 15:34:06.820002    9127 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1211 15:34:06.820065    9127 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1211 15:34:06.825247    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:06.926786    9127 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:23.249791    9127 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.323491167s)
	I1211 15:34:23.249869    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1211 15:34:23.254818    9127 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1211 15:34:23.262261    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:23.267411    9127 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1211 15:34:23.353075    9127 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1211 15:34:23.444963    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:23.536577    9127 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1211 15:34:23.543010    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:23.547387    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:23.641952    9127 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1211 15:34:23.682232    9127 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1211 15:34:23.682327    9127 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1211 15:34:23.684274    9127 start.go:563] Will wait 60s for crictl version
	I1211 15:34:23.684341    9127 ssh_runner.go:195] Run: which crictl
	I1211 15:34:23.685912    9127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 15:34:23.697909    9127 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1211 15:34:23.697995    9127 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:23.710591    9127 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:23.728589    9127 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1211 15:34:23.728749    9127 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1211 15:34:23.730203    9127 kubeadm.go:883] updating cluster {Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1211 15:34:23.730246    9127 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:34:23.730296    9127 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:23.740974    9127 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:23.740982    9127 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:23.741041    9127 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:23.744096    9127 ssh_runner.go:195] Run: which lz4
	I1211 15:34:23.745506    9127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 15:34:23.746800    9127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 15:34:23.746809    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1211 15:34:24.737503    9127 docker.go:653] duration metric: took 992.084083ms to copy over tarball
	I1211 15:34:24.737578    9127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 15:34:25.947140    9127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.209586083s)
	I1211 15:34:25.947157    9127 ssh_runner.go:146] rm: /preloaded.tar.lz4
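
The preload path above is: confirm the tarball is absent on the VM (the stat fails with status 1), scp the ~360 MB lz4 tarball across, unpack it into /var with xattrs preserved, then delete it. A sketch of the extraction step via os/exec, mirroring the tar invocation in the log:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar invocation above: decompress with lz4 and
// unpack into /var, keeping security xattrs so file capabilities survive.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
	// The tarball is removed afterwards to reclaim ~360 MB inside the VM.
}
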
	I1211 15:34:25.964249    9127 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:25.967828    9127 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1211 15:34:25.972922    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:26.059020    9127 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:27.251243    9127 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.192237875s)
	I1211 15:34:27.251340    9127 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:27.264476    9127 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:27.264494    9127 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:27.264502    9127 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
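
Note why LoadCachedImages runs at all: the preload ships k8s.gcr.io/* tags (see the docker images output above) while this minikube expects registry.k8s.io/* names, so every expected image is treated as missing even though the content is identical. A toy sketch of that tag-based comparison (missingImages is a hypothetical helper, not a minikube function):

package main

import "fmt"

// missingImages reports every expected tag that is not present verbatim in
// the daemon's image list; comparison is by name, not by content.
func missingImages(expected, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range expected {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	expected := []string{"registry.k8s.io/kube-apiserver:v1.24.1", "registry.k8s.io/pause:3.7"}
	present := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/pause:3.7"}
	// Both come back missing despite identical image content.
	fmt.Println(missingImages(expected, present))
}
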
	I1211 15:34:27.268957    9127 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:27.271925    9127 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.274782    9127 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:27.274860    9127 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.277085    9127 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.277101    9127 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.278459    9127 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.279180    9127 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.280231    9127 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.280513    9127 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.281629    9127 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1211 15:34:27.282216    9127 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.283137    9127 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.283196    9127 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:27.284212    9127 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1211 15:34:27.285119    9127 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:27.869286    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.874704    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.876583    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.884294    9127 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1211 15:34:27.884331    9127 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.884380    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.896136    9127 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1211 15:34:27.896165    9127 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.896207    9127 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1211 15:34:27.896242    9127 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.896249    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.896276    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.904749    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1211 15:34:27.916650    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1211 15:34:27.916672    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1211 15:34:27.962800    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.974127    9127 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1211 15:34:27.974149    9127 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.974212    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.977032    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.987504    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1211 15:34:27.992008    9127 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1211 15:34:27.992029    9127 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.992082    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:28.002640    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1211 15:34:28.052306    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1211 15:34:28.062993    9127 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1211 15:34:28.063012    9127 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1211 15:34:28.063073    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1211 15:34:28.073442    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1211 15:34:28.073578    9127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1211 15:34:28.075378    9127 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1211 15:34:28.075391    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1211 15:34:28.083948    9127 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1211 15:34:28.083959    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1211 15:34:28.111525    9127 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
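
Each transfer follows the same pattern: scp the image tarball from the host cache into /var/lib/minikube/images, then stream it into the daemon with `sudo cat ... | docker load` (sudo cat because the file is root-owned inside the VM). A local sketch of the load step using os/exec:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dockerLoad streams an image tarball into the daemon, the local analogue of
// the `sudo cat /var/lib/minikube/images/pause_3.7 | docker load` line above.
func dockerLoad(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // pipe the tarball straight into docker load
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
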
	W1211 15:34:28.150601    9127 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:28.150758    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:28.161839    9127 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1211 15:34:28.161864    9127 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:28.161926    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:28.173740    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1211 15:34:28.173868    9127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:28.175773    9127 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1211 15:34:28.175784    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1211 15:34:28.222106    9127 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:28.222127    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W1211 15:34:28.245584    9127 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:28.245858    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:28.272548    9127 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1211 15:34:28.272598    9127 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1211 15:34:28.272620    9127 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:28.272676    9127 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:29.174200    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1211 15:34:29.174474    9127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:29.178542    9127 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1211 15:34:29.178594    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1211 15:34:29.228223    9127 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:29.228239    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1211 15:34:29.467625    9127 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1211 15:34:29.467663    9127 cache_images.go:92] duration metric: took 2.203221917s to LoadCachedImages
	W1211 15:34:29.467752    9127 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1211 15:34:29.467760    9127 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1211 15:34:29.467820    9127 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-031000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 15:34:29.467907    9127 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1211 15:34:29.481696    9127 cni.go:84] Creating CNI manager for ""
	I1211 15:34:29.481714    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:34:29.481726    9127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 15:34:29.481741    9127 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-031000 NodeName:running-upgrade-031000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 15:34:29.481824    9127 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-031000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 15:34:29.481893    9127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1211 15:34:29.485383    9127 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 15:34:29.485420    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 15:34:29.488303    9127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1211 15:34:29.493283    9127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 15:34:29.498003    9127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1211 15:34:29.503680    9127 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1211 15:34:29.505425    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:29.592964    9127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:34:29.598609    9127 certs.go:68] Setting up /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000 for IP: 10.0.2.15
	I1211 15:34:29.598616    9127 certs.go:194] generating shared ca certs ...
	I1211 15:34:29.598625    9127 certs.go:226] acquiring lock for ca certs: {Name:mk9a2f9aee3b15a0ae3e213800d46f88db78207a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.598777    9127 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key
	I1211 15:34:29.599100    9127 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key
	I1211 15:34:29.599106    9127 certs.go:256] generating profile certs ...
	I1211 15:34:29.599400    9127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.key
	I1211 15:34:29.599418    9127 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6
	I1211 15:34:29.599427    9127 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1211 15:34:29.681554    9127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6 ...
	I1211 15:34:29.681565    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6: {Name:mk94e27a7067bfbb2a635ef1c0f7e2a4c01f2256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.681834    9127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6 ...
	I1211 15:34:29.681839    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6: {Name:mk0a7a9ea9bc2778f3cc6c528fcf72f51e126b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.681990    9127 certs.go:381] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt
	I1211 15:34:29.682110    9127 certs.go:385] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key
	I1211 15:34:29.682439    9127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/proxy-client.key
	I1211 15:34:29.682613    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem (1338 bytes)
	W1211 15:34:29.682802    9127 certs.go:480] ignoring /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135_empty.pem, impossibly tiny 0 bytes
	I1211 15:34:29.682808    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 15:34:29.682977    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem (1078 bytes)
	I1211 15:34:29.683159    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem (1123 bytes)
	I1211 15:34:29.683352    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem (1675 bytes)
	I1211 15:34:29.683524    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:29.685551    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 15:34:29.693528    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 15:34:29.701235    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 15:34:29.710307    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 15:34:29.718527    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1211 15:34:29.725589    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 15:34:29.732852    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 15:34:29.739851    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 15:34:29.746529    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 15:34:29.754082    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem --> /usr/share/ca-certificates/7135.pem (1338 bytes)
	I1211 15:34:29.761253    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /usr/share/ca-certificates/71352.pem (1708 bytes)
	I1211 15:34:29.768507    9127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 15:34:29.773712    9127 ssh_runner.go:195] Run: openssl version
	I1211 15:34:29.775527    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71352.pem && ln -fs /usr/share/ca-certificates/71352.pem /etc/ssl/certs/71352.pem"
	I1211 15:34:29.779566    9127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71352.pem
	I1211 15:34:29.781302    9127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:22 /usr/share/ca-certificates/71352.pem
	I1211 15:34:29.781337    9127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71352.pem
	I1211 15:34:29.783372    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71352.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 15:34:29.786141    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 15:34:29.789338    9127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:29.790843    9127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:29.790870    9127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:29.792974    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 15:34:29.795575    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135.pem && ln -fs /usr/share/ca-certificates/7135.pem /etc/ssl/certs/7135.pem"
	I1211 15:34:29.798925    9127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135.pem
	I1211 15:34:29.800485    9127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:22 /usr/share/ca-certificates/7135.pem
	I1211 15:34:29.800515    9127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135.pem
	I1211 15:34:29.802460    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7135.pem /etc/ssl/certs/51391683.0"
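
The three test -L ... || ln -fs commands above build the OpenSSL-style hash links (<subject-hash>.0, e.g. b5213941.0 for minikubeCA.pem) that let TLS clients find each CA by subject hash. A Go sketch of computing the hash with openssl and (re)pointing the link, with -f semantics:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and points
// <certsDir>/<hash>.0 at the PEM, replicating the ln -fs lines above.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	os.Remove(link) // -f semantics: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
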
	I1211 15:34:29.805686    9127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 15:34:29.807313    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 15:34:29.809381    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 15:34:29.811396    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 15:34:29.813739    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 15:34:29.815995    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 15:34:29.817621    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
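
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check in pure Go, as a sketch (path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert's NotAfter falls inside the next d,
// the same question `openssl x509 -checkend 86400` answers via exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon) // true would force regeneration
}
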
	I1211 15:34:29.819357    9127 kubeadm.go:392] StartCluster: {Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:34:29.819430    9127 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:29.836773    9127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 15:34:29.840025    9127 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1211 15:34:29.840038    9127 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1211 15:34:29.840073    9127 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 15:34:29.843061    9127 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.843559    9127 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-031000" does not appear in /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:34:29.843672    9127 kubeconfig.go:62] /Users/jenkins/minikube-integration/20083-6627/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-031000" cluster setting kubeconfig missing "running-upgrade-031000" context setting]
	I1211 15:34:29.843863    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.844298    9127 kapi.go:59] client config for running-upgrade-031000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044bc0b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:34:29.844787    9127 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 15:34:29.848054    9127 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-031000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
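
The diff above shows why the restart path reconfigures rather than reuses the running cluster: the staged config switches criSocket to the unix:// URI form and changes the kubelet cgroup driver from systemd to cgroupfs. The check itself is nothing more than a unified diff of the live kubeadm.yaml against the freshly rendered kubeadm.yaml.new, with diff's exit status 1 signalling drift. A minimal sketch of that check in Go, using the paths from the log and plain diff semantics rather than minikube's actual helper:

package main

import (
	"fmt"
	"os/exec"
)

// driftDetected reruns the check from the log: a unified diff of the live
// kubeadm config against the newly rendered one. diff exits 0 when the
// files match, 1 when they differ, and >1 on a real error. (In the log this
// runs under sudo inside the guest; here it is a plain local sketch.)
func driftDetected(live, staged string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", live, staged).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := driftDetected("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drift {
		fmt.Print("config drift detected, will reconfigure:\n", diff)
	}
}
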
	I1211 15:34:29.848060    9127 kubeadm.go:1160] stopping kube-system containers ...
	I1211 15:34:29.848113    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:29.859366    9127 docker.go:483] Stopping containers: [6085b0e488b0 4b80b10abc15 1140a38c8ff2 deb6c6e8ccd5 54bb8dab6d62 c12e2ab1ed1d 14d75f9b9c9d a07f1fe8059c d34888fb8fe2 9156d239f005 a954fb185965 ebd7105b237d c6f7cfc4bc17 6be8bf310db2 6f0113ec40f2 1588ec1e49c6 eb06ed70196d 95038533cd6f]
	I1211 15:34:29.859446    9127 ssh_runner.go:195] Run: docker stop 6085b0e488b0 4b80b10abc15 1140a38c8ff2 deb6c6e8ccd5 54bb8dab6d62 c12e2ab1ed1d 14d75f9b9c9d a07f1fe8059c d34888fb8fe2 9156d239f005 a954fb185965 ebd7105b237d c6f7cfc4bc17 6be8bf310db2 6f0113ec40f2 1588ec1e49c6 eb06ed70196d 95038533cd6f
	I1211 15:34:29.871631    9127 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 15:34:29.964407    9127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:34:29.968304    9127 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Dec 11 23:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec 11 23:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 11 23:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Dec 11 23:33 /etc/kubernetes/scheduler.conf
	
	I1211 15:34:29.968350    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf
	I1211 15:34:29.971148    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.971184    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:34:29.974386    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf
	I1211 15:34:29.977605    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.977640    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:34:29.980678    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf
	I1211 15:34:29.983440    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.983467    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:34:29.986561    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf
	I1211 15:34:29.989681    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.989717    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
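
Each of the four grep probes above exits with status 1, meaning none of the existing kubeconfigs mention the expected control-plane endpoint, so all four files are removed and regenerated by the kubeadm init phase kubeconfig step that follows. A sketch of the same stale-endpoint test, with the endpoint and paths taken from the log (illustrative only, not minikube's kubeadm.go code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// hasEndpoint mirrors the `sudo grep <endpoint> <file>` probes: a stale
// kubeconfig is one that no longer contains the expected control-plane URL.
func hasEndpoint(path, endpoint string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	return bytes.Contains(data, []byte(endpoint)), nil
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:61515"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		ok, err := hasEndpoint(f, endpoint)
		if err != nil || !ok {
			fmt.Printf("%s is stale or unreadable, would remove and regenerate\n", f)
		}
	}
}
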
	I1211 15:34:29.992329    9127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:34:29.995051    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.017364    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.491793    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.876377    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.902616    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.926655    9127 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:34:30.926746    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:31.426901    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:31.928828    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:31.933842    9127 api_server.go:72] duration metric: took 1.007221709s to wait for apiserver process to appear ...
	I1211 15:34:31.933853    9127 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:34:31.933863    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:36.933910    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:36.933950    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:41.935653    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:41.935697    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:46.935897    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:46.935925    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:51.936092    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:51.936137    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:56.936625    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:56.936697    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:01.937336    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:01.937390    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:06.938265    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:06.938337    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:11.939404    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:11.939446    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:16.940105    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:16.940147    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:21.941799    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:21.941860    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:26.942686    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:26.942746    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:31.944973    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
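
The pattern above repeats for the remainder of the test: each /healthz probe is given roughly five seconds before the client gives up, and every failed attempt is followed by another probe until the overall wait budget is exhausted. A minimal sketch of such a polling loop, assuming a 5s per-request timeout and skipping certificate verification (the real client pins minikube's CA, per the rest.Config dump earlier):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver's /healthz endpoint until it answers 200
// or the attempt budget runs out, giving each probe its own short timeout,
// as the ~5s gaps between log lines above suggest.
func pollHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the real client verifies minikube's CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", i+1, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // apiserver answered healthz
		}
		fmt.Printf("attempt %d: healthz returned %s\n", i+1, resp.Status)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 12); err != nil {
		fmt.Println(err)
	}
}
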
	I1211 15:35:31.945176    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:31.962585    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:31.962680    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:31.974403    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:31.974481    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:31.984938    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:31.985014    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:31.995800    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:31.995863    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:32.007255    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:32.007328    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:32.018297    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:32.018375    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:32.028058    9127 logs.go:282] 0 containers: []
	W1211 15:35:32.028072    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:32.028143    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:32.038295    9127 logs.go:282] 0 containers: []
	W1211 15:35:32.038307    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:32.038312    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:32.038318    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:32.137601    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:32.137614    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:32.152051    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:32.152062    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:32.164071    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:32.164082    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:32.191742    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:32.191754    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:32.206511    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:32.206522    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:32.218433    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:32.218444    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:32.236538    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:32.236549    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:32.247637    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:32.247651    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:32.264794    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:32.264804    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:32.278701    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:32.278714    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:32.317579    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:32.317587    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:32.329268    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:32.329278    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:32.340905    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:32.340917    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:32.352441    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:32.352451    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:32.356947    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:32.356954    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:32.369723    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:32.369733    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
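
Between healthz probes, the runner collects diagnostics: docker ps -a finds two instances of each control-plane component (the current container and its predecessor), then the last 400 lines of each one's logs are tailed, alongside the kubelet and Docker journals, dmesg, and kubectl describe nodes. A stripped-down sketch of the per-container step, using the two kube-apiserver IDs from the log purely as placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs approximates one "Gathering logs for ..." step:
// `docker logs --tail 400 <id>` for each container of interest.
func tailContainerLogs(ids []string) {
	for _, id := range ids {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("docker logs %s: %v\n", id, err)
			continue
		}
		fmt.Printf("--- %s: %d bytes of logs ---\n", id, len(out))
	}
}

func main() {
	tailContainerLogs([]string{"d5c98d25fb5c", "54bb8dab6d62"})
}
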
	I1211 15:35:34.885961    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:39.888126    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:39.888346    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:39.902816    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:39.902914    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:39.914742    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:39.914823    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:39.925494    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:39.925585    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:39.935991    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:39.936072    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:39.946733    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:39.946814    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:39.957506    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:39.957583    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:39.967565    9127 logs.go:282] 0 containers: []
	W1211 15:35:39.967583    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:39.967652    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:39.977691    9127 logs.go:282] 0 containers: []
	W1211 15:35:39.977708    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:39.977713    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:39.977718    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:39.994564    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:39.994576    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:39.999373    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:39.999380    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:40.011117    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:40.011127    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:40.024199    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:40.024213    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:40.035598    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:40.035611    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:40.071789    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:40.071799    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:40.093446    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:40.093455    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:40.110645    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:40.110655    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:40.123585    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:40.123597    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:40.137810    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:40.137820    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:40.149558    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:40.149570    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:40.164007    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:40.164017    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:40.174863    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:40.174875    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:40.192707    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:40.192719    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:40.219554    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:40.219565    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:40.259227    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:40.259235    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:42.774568    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:47.776942    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:47.777142    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:47.793016    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:47.793119    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:47.806188    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:47.806275    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:47.817099    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:47.817175    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:47.827312    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:47.827389    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:47.837471    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:47.837543    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:47.849644    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:47.849724    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:47.859933    9127 logs.go:282] 0 containers: []
	W1211 15:35:47.859944    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:47.860033    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:47.871223    9127 logs.go:282] 0 containers: []
	W1211 15:35:47.871233    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:47.871238    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:47.871243    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:47.890076    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:47.890087    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:47.901809    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:47.901822    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:47.943936    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:47.943948    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:47.978918    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:47.978931    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:47.993193    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:47.993208    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:48.006910    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:48.006924    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:48.027119    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:48.027130    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:48.039044    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:48.039056    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:48.050729    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:48.050743    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:48.063144    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:48.063159    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:48.074515    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:48.074529    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:48.091319    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:48.091334    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:48.103780    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:48.103795    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:48.107981    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:48.107991    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:48.123218    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:48.123229    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:48.135670    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:48.135684    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:50.663696    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:55.665998    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:55.666238    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:55.690415    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:55.690550    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:55.710008    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:55.710104    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:55.740045    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:55.740132    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:55.751215    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:55.751301    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:55.761801    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:55.761882    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:55.772243    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:55.772324    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:55.782285    9127 logs.go:282] 0 containers: []
	W1211 15:35:55.782303    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:55.782370    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:55.792933    9127 logs.go:282] 0 containers: []
	W1211 15:35:55.792944    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:55.792950    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:55.792955    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:55.809033    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:55.809042    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:55.829603    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:55.829613    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:55.848614    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:55.848626    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:55.860006    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:55.860021    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:55.885319    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:55.885330    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:55.923945    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:55.923962    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:55.939072    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:55.939083    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:55.950678    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:55.950689    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:55.987363    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:55.987375    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:56.009385    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:56.009395    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:56.023944    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:56.023954    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:56.035918    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:56.035929    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:56.047579    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:56.047590    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:56.059505    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:56.059517    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:56.064032    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:56.064040    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:56.079947    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:56.079957    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:58.594081    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:03.596776    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:03.597081    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:03.632337    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:03.632498    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:03.653003    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:03.653116    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:03.667971    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:03.668065    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:03.680091    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:03.680173    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:03.690285    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:03.690365    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:03.701004    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:03.701086    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:03.711667    9127 logs.go:282] 0 containers: []
	W1211 15:36:03.711679    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:03.711746    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:03.724411    9127 logs.go:282] 0 containers: []
	W1211 15:36:03.724422    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:03.724428    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:03.724434    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:03.736789    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:03.736803    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:03.748802    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:03.748813    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:03.760474    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:03.760486    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:03.776324    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:03.776339    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:03.791293    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:03.791304    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:03.806032    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:03.806042    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:03.822527    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:03.822538    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:03.827463    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:03.827470    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:03.840180    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:03.840191    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:03.858096    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:03.858110    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:03.872779    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:03.872790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:03.892771    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:03.892783    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:03.920462    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:03.920474    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:03.932531    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:03.932542    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:03.971840    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:03.971849    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:04.006694    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:04.006710    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:06.520200    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:11.522688    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:11.523197    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:11.562969    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:11.563133    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:11.584252    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:11.584358    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:11.599405    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:11.599496    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:11.611794    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:11.611872    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:11.622490    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:11.622572    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:11.633749    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:11.633833    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:11.643645    9127 logs.go:282] 0 containers: []
	W1211 15:36:11.643657    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:11.643726    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:11.654646    9127 logs.go:282] 0 containers: []
	W1211 15:36:11.654660    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:11.654666    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:11.654672    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:11.659511    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:11.659520    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:11.674206    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:11.674217    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:11.686655    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:11.686665    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:11.703850    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:11.703860    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:11.730322    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:11.730330    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:11.742418    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:11.742429    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:11.754066    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:11.754079    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:11.794422    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:11.794436    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:11.832144    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:11.832159    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:11.846840    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:11.846852    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:11.864678    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:11.864689    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:11.877658    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:11.877667    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:11.893480    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:11.893491    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:11.910073    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:11.910085    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:11.921511    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:11.921522    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:11.932682    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:11.932693    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:14.445162    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:19.446069    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:19.446293    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:19.471891    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:19.472011    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:19.486904    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:19.487000    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:19.499062    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:19.499136    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:19.509842    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:19.509929    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:19.520419    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:19.520505    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:19.531023    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:19.531111    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:19.541030    9127 logs.go:282] 0 containers: []
	W1211 15:36:19.541044    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:19.541113    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:19.551389    9127 logs.go:282] 0 containers: []
	W1211 15:36:19.551400    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:19.551406    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:19.551411    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:19.568299    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:19.568309    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:19.607201    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:19.607212    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:19.611637    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:19.611643    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:19.646668    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:19.646680    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:19.660702    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:19.660713    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:19.672222    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:19.672237    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:19.692723    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:19.692734    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:19.712182    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:19.712192    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:19.732367    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:19.732378    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:19.746579    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:19.746594    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:19.757663    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:19.757673    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:19.784095    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:19.784104    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:19.798042    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:19.798051    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:19.812428    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:19.812441    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:19.824265    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:19.824277    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:19.836215    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:19.836226    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:22.352592    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:27.354792    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:27.355046    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:27.376044    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:27.376139    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:27.390539    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:27.390628    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:27.402943    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:27.403029    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:27.418897    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:27.418982    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:27.429644    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:27.429729    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:27.440075    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:27.440160    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:27.452504    9127 logs.go:282] 0 containers: []
	W1211 15:36:27.452516    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:27.452584    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:27.462439    9127 logs.go:282] 0 containers: []
	W1211 15:36:27.462451    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:27.462456    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:27.462461    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:27.479830    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:27.479839    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:27.490992    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:27.491006    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:27.502660    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:27.502670    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:27.537133    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:27.537145    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:27.552007    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:27.552017    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:27.563491    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:27.563502    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:27.601849    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:27.601857    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:27.616002    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:27.616011    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:27.627334    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:27.627343    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:27.631563    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:27.631569    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:27.643291    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:27.643303    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:27.661481    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:27.661491    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:27.673637    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:27.673647    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:27.699027    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:27.699034    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:27.716228    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:27.716238    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:27.731452    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:27.731461    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:30.244994    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:35.247179    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:35.247379    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:35.260457    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:35.260551    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:35.271888    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:35.271964    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:35.287018    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:35.287095    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:35.298444    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:35.298522    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:35.309253    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:35.309339    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:35.320455    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:35.320540    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:35.330936    9127 logs.go:282] 0 containers: []
	W1211 15:36:35.330947    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:35.331012    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:35.340635    9127 logs.go:282] 0 containers: []
	W1211 15:36:35.340646    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:35.340652    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:35.340658    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:35.352147    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:35.352161    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:35.363536    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:35.363547    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:35.387576    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:35.387588    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:35.399693    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:35.399707    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:35.413696    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:35.413709    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:35.430101    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:35.430111    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:35.447375    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:35.447387    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:35.459065    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:35.459079    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:35.485561    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:35.485572    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:35.510394    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:35.510404    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:35.535054    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:35.535064    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:35.546945    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:35.546956    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:35.551209    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:35.551215    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:35.586616    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:35.586630    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:35.605504    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:35.605516    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:35.617391    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:35.617401    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
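Note that every "Gathering logs for ..." pass is tail-bounded: container logs via docker logs --tail 400 <id>, unit logs via journalctl -n 400, and dmesg filtered to warn-and-above, so each retry collects a fixed-size snapshot rather than the full history. A hedged sketch of assembling those commands, assuming shell access equivalent to the ssh_runner lines above (the helper name gatherCmds is hypothetical, not a minikube function):

    package main

    import "fmt"

    // gatherCmds mirrors the commands in the log; every source is capped
    // (tail/-n 400) so repeated gathering stays bounded per attempt.
    func gatherCmds(containerIDs map[string]string) []string {
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u docker -u cri-docker -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, id := range containerIDs {
            cmds = append(cmds, fmt.Sprintf("docker logs --tail 400 %s # %s", id, name))
        }
        return cmds
    }

    func main() {
        for _, c := range gatherCmds(map[string]string{
            "kube-apiserver": "d5c98d25fb5c",
            "etcd":           "02d318e6eaa7",
        }) {
            fmt.Println(c)
        }
    }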
	I1211 15:36:38.159583    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:43.161778    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[The probe-and-gather cycle above then repeated with no change other than timestamps: after the 15:36:43 timeout, and after nine further healthz timeouts at 15:36:51, 15:36:58, 15:37:06, 15:37:14, 15:37:22, 15:37:30, 15:37:38, 15:37:46, and 15:37:54, minikube reran the same container enumeration (the same two container IDs per component, still no "kindnet" or "storage-provisioner") and regathered the same set of kubelet, dmesg, describe-nodes, Docker, container-status, and per-container logs each time.]
	I1211 15:37:57.123513    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:02.125792    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
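
Each healthz probe in this loop times out after five seconds (the Client.Timeout in the error text), so every failed round costs roughly 5s before the next log-gathering pass begins. A rough stand-in for the probe, using curl in place of minikube's Go HTTP client (illustrative only; the real client trusts the cluster CA rather than skipping verification):

    # probe the apiserver health endpoint with the same 5s budget
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz
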
	I1211 15:38:02.125927    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:02.138734    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:02.138830    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:02.149906    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:02.149981    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:02.160750    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:02.160834    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:02.171387    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:02.171472    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:02.182078    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:02.182164    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:02.193142    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:02.193222    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:02.203389    9127 logs.go:282] 0 containers: []
	W1211 15:38:02.203401    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:02.203466    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:02.213504    9127 logs.go:282] 0 containers: []
	W1211 15:38:02.213517    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:02.213522    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:02.213528    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:02.226166    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:02.226177    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:02.249249    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:02.249257    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:02.287514    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:02.287523    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:02.300270    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:02.300283    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:02.312008    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:02.312020    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:02.323469    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:02.323481    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:02.339993    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:02.340003    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:02.357412    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:02.357424    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:02.370344    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:02.370355    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:02.406071    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:02.406092    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:02.424676    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:02.424686    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:02.438538    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:02.438551    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:02.458084    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:02.458095    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:02.469948    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:02.469957    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:02.481487    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:02.481500    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:02.486533    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:02.486541    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:05.003098    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:10.005301    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:10.005448    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:10.018540    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:10.018631    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:10.029127    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:10.029205    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:10.039844    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:10.039923    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:10.049981    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:10.050057    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:10.060298    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:10.060364    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:10.070704    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:10.070776    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:10.080672    9127 logs.go:282] 0 containers: []
	W1211 15:38:10.080685    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:10.080754    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:10.092381    9127 logs.go:282] 0 containers: []
	W1211 15:38:10.092395    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:10.092400    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:10.092406    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:10.107306    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:10.107317    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:10.118778    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:10.118790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:10.130794    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:10.130806    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:10.141994    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:10.142002    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:10.165850    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:10.165862    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:10.202759    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:10.202775    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:10.217315    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:10.217327    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:10.231195    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:10.231207    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:10.272812    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:10.272821    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:10.286673    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:10.286685    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:10.304785    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:10.304798    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:10.317311    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:10.317325    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:10.333425    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:10.333436    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:10.345005    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:10.345015    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:10.356497    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:10.356506    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:10.360916    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:10.360923    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:12.874273    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:17.876816    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:17.876954    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:17.889031    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:17.889113    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:17.900986    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:17.901068    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:17.912700    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:17.912790    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:17.925407    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:17.925490    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:17.937156    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:17.937241    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:17.948932    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:17.949029    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:17.960750    9127 logs.go:282] 0 containers: []
	W1211 15:38:17.960762    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:17.960834    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:17.972171    9127 logs.go:282] 0 containers: []
	W1211 15:38:17.972183    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:17.972189    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:17.972195    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:17.977354    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:17.977367    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:17.992191    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:17.992201    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:18.004885    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:18.004896    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:18.016731    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:18.016744    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:18.030007    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:18.030020    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:18.042690    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:18.042722    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:18.061668    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:18.061683    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:18.076237    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:18.076250    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:18.112925    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:18.112939    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:18.125527    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:18.125542    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:18.137490    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:18.137506    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:18.179115    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:18.179127    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:18.193782    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:18.193796    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:18.206455    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:18.206468    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:18.226124    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:18.226137    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:18.249627    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:18.249648    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:20.767088    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:25.769215    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:25.769360    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:25.784601    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:25.784686    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:25.795521    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:25.795607    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:25.806598    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:25.806685    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:25.817537    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:25.817622    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:25.828376    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:25.828459    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:25.839024    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:25.839114    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:25.853349    9127 logs.go:282] 0 containers: []
	W1211 15:38:25.853364    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:25.853435    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:25.863955    9127 logs.go:282] 0 containers: []
	W1211 15:38:25.863969    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:25.863975    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:25.863982    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:25.877795    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:25.877806    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:25.895097    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:25.895112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:25.906829    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:25.906844    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:25.946095    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:25.946112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:25.962364    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:25.962377    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:25.974191    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:25.974203    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:25.991294    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:25.991308    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:26.027879    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:26.027891    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:26.043892    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:26.043906    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:26.058473    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:26.058484    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:26.070049    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:26.070070    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:26.082215    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:26.082226    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:26.086651    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:26.086657    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:26.101532    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:26.101543    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:26.113140    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:26.113151    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:26.124971    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:26.124980    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:28.647695    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:33.649025    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:33.649077    9127 kubeadm.go:597] duration metric: took 4m3.816557083s to restartPrimaryControlPlane
	W1211 15:38:33.649120    9127 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 15:38:33.649137    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1211 15:38:34.612826    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 15:38:34.618438    9127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:38:34.621457    9127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:38:34.624132    9127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 15:38:34.624139    9127 kubeadm.go:157] found existing configuration files:
	
	I1211 15:38:34.624177    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf
	I1211 15:38:34.626771    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 15:38:34.626805    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:38:34.630235    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf
	I1211 15:38:34.633330    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 15:38:34.633615    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:38:34.636119    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf
	I1211 15:38:34.638672    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 15:38:34.638707    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:38:34.641521    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf
	I1211 15:38:34.643922    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 15:38:34.643948    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
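
The four grep-then-rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. Condensed into one loop (a sketch; the log shows minikube issuing each pair as a separate ssh command):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:61515" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"       # drop configs that point at a stale endpoint
    done
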
	I1211 15:38:34.646752    9127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 15:38:34.663853    9127 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1211 15:38:34.663882    9127 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 15:38:34.717423    9127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 15:38:34.717471    9127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 15:38:34.717527    9127 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1211 15:38:34.766985    9127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 15:38:34.771164    9127 out.go:235]   - Generating certificates and keys ...
	I1211 15:38:34.771205    9127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 15:38:34.771247    9127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 15:38:34.771294    9127 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 15:38:34.771329    9127 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1211 15:38:34.771369    9127 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 15:38:34.771402    9127 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1211 15:38:34.771444    9127 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1211 15:38:34.771475    9127 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1211 15:38:34.771516    9127 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 15:38:34.771559    9127 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 15:38:34.771577    9127 kubeadm.go:310] [certs] Using the existing "sa" key
	I1211 15:38:34.771607    9127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 15:38:34.843859    9127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 15:38:35.070884    9127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 15:38:35.223662    9127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 15:38:35.492817    9127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 15:38:35.520835    9127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 15:38:35.522119    9127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 15:38:35.522145    9127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 15:38:35.613696    9127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 15:38:35.617008    9127 out.go:235]   - Booting up control plane ...
	I1211 15:38:35.617068    9127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 15:38:35.617113    9127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 15:38:35.617154    9127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 15:38:35.617200    9127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 15:38:35.617275    9127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1211 15:38:39.618549    9127 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002155 seconds
	I1211 15:38:39.618647    9127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 15:38:39.624721    9127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 15:38:40.133584    9127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 15:38:40.133727    9127 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-031000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 15:38:40.637075    9127 kubeadm.go:310] [bootstrap-token] Using token: o2tufw.jgisq56w1ljinhvv
	I1211 15:38:40.639718    9127 out.go:235]   - Configuring RBAC rules ...
	I1211 15:38:40.639767    9127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 15:38:40.639811    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 15:38:40.641637    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 15:38:40.643514    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 15:38:40.644475    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 15:38:40.645293    9127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 15:38:40.648307    9127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 15:38:40.824710    9127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 15:38:41.041186    9127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 15:38:41.041676    9127 kubeadm.go:310] 
	I1211 15:38:41.041708    9127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 15:38:41.041713    9127 kubeadm.go:310] 
	I1211 15:38:41.041753    9127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 15:38:41.041759    9127 kubeadm.go:310] 
	I1211 15:38:41.041823    9127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 15:38:41.041870    9127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 15:38:41.041901    9127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 15:38:41.041918    9127 kubeadm.go:310] 
	I1211 15:38:41.041962    9127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 15:38:41.041966    9127 kubeadm.go:310] 
	I1211 15:38:41.041992    9127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 15:38:41.041998    9127 kubeadm.go:310] 
	I1211 15:38:41.042023    9127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 15:38:41.042093    9127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 15:38:41.042134    9127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 15:38:41.042136    9127 kubeadm.go:310] 
	I1211 15:38:41.042197    9127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 15:38:41.042238    9127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 15:38:41.042241    9127 kubeadm.go:310] 
	I1211 15:38:41.042295    9127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2tufw.jgisq56w1ljinhvv \
	I1211 15:38:41.042360    9127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 \
	I1211 15:38:41.042374    9127 kubeadm.go:310] 	--control-plane 
	I1211 15:38:41.042377    9127 kubeadm.go:310] 
	I1211 15:38:41.042418    9127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 15:38:41.042423    9127 kubeadm.go:310] 
	I1211 15:38:41.042458    9127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2tufw.jgisq56w1ljinhvv \
	I1211 15:38:41.042508    9127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 
	I1211 15:38:41.042563    9127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 15:38:41.042568    9127 cni.go:84] Creating CNI manager for ""
	I1211 15:38:41.042575    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:38:41.046345    9127 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 15:38:41.053330    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 15:38:41.056425    9127 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
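
The 496-byte conflist itself is not reproduced in the log. A representative bridge CNI config of the kind minikube writes here (illustrative values, not the exact payload):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
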
	I1211 15:38:41.061333    9127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 15:38:41.061388    9127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 15:38:41.061393    9127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-031000 minikube.k8s.io/updated_at=2024_12_11T15_38_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=running-upgrade-031000 minikube.k8s.io/primary=true
	I1211 15:38:41.104329    9127 ops.go:34] apiserver oom_adj: -16
	I1211 15:38:41.104336    9127 kubeadm.go:1113] duration metric: took 42.998083ms to wait for elevateKubeSystemPrivileges
	I1211 15:38:41.104345    9127 kubeadm.go:394] duration metric: took 4m11.292743208s to StartCluster
	I1211 15:38:41.104354    9127 settings.go:142] acquiring lock: {Name:mk7be6692255448ff6d4be3295ef81ca16b62a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:41.104437    9127 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:38:41.104825    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:41.105037    9127 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:38:41.105107    9127 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 15:38:41.105144    9127 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-031000"
	I1211 15:38:41.105152    9127 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-031000"
	W1211 15:38:41.105157    9127 addons.go:243] addon storage-provisioner should already be in state true
	I1211 15:38:41.105167    9127 host.go:66] Checking if "running-upgrade-031000" exists ...
	I1211 15:38:41.105150    9127 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-031000"
	I1211 15:38:41.105185    9127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-031000"
	I1211 15:38:41.105245    9127 config.go:182] Loaded profile config "running-upgrade-031000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:38:41.105580    9127 retry.go:31] will retry after 1.415908019s: connect: dial unix /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/monitor: connect: connection refused
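
This retry is not about the cluster itself: the qemu2 driver reaches the VM's monitor through a unix socket under the machine directory, and "connection refused" means nothing is listening on that socket yet. A quick manual probe of the same socket, assuming a netcat build with unix-socket support:

    nc -U /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/monitor
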
	I1211 15:38:41.108391    9127 out.go:177] * Verifying Kubernetes components...
	I1211 15:38:41.116323    9127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:38:41.119296    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:38:41.123372    9127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:41.123380    9127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 15:38:41.123386    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:38:41.218346    9127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:38:41.224279    9127 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:38:41.224345    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:38:41.229041    9127 api_server.go:72] duration metric: took 123.995541ms to wait for apiserver process to appear ...
	I1211 15:38:41.229051    9127 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:38:41.229059    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:41.235479    9127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:42.524656    9127 kapi.go:59] client config for running-upgrade-031000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044bc0b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:38:42.524813    9127 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-031000"
	W1211 15:38:42.524821    9127 addons.go:243] addon default-storageclass should already be in state true
	I1211 15:38:42.524843    9127 host.go:66] Checking if "running-upgrade-031000" exists ...
	I1211 15:38:42.525580    9127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:42.525587    9127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 15:38:42.525615    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:38:42.568397    9127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:42.657406    9127 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 15:38:42.657418    9127 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 15:38:46.230997    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:46.232048    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:51.232460    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:51.232483    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:56.232960    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:56.233016    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:01.234032    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:01.234073    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:06.235137    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:06.235160    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:11.236414    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:11.236454    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1211 15:39:12.658878    9127 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1211 15:39:12.663226    9127 out.go:177] * Enabled addons: storage-provisioner
	I1211 15:39:12.670134    9127 addons.go:510] duration metric: took 31.566017333s for enable addons: enabled=[storage-provisioner]
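
Of the two addons, only storage-provisioner succeeded: it just needs its manifest applied, whereas default-storageclass must list StorageClasses through the apiserver, which is unreachable (the i/o timeout above). Once the apiserver is healthy, the same effect can be achieved manually, e.g. by marking the standard class as default:

    kubectl patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
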
	I1211 15:39:16.238145    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:16.238177    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:21.238613    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:21.238644    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:26.240712    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:26.240755    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:31.242803    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:31.242830    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:36.244638    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:36.244682    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:41.246813    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:41.246910    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:41.258572    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:39:41.258664    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:41.269123    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:39:41.269201    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:41.280344    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:39:41.280423    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:41.291020    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:39:41.291095    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:41.301818    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:39:41.301889    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:41.312706    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:39:41.312774    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:41.327869    9127 logs.go:282] 0 containers: []
	W1211 15:39:41.327880    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:41.327951    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:41.338545    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:39:41.338560    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:39:41.338568    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:39:41.349514    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:39:41.349525    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:39:41.364579    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:39:41.364589    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:39:41.376143    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:39:41.376153    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:39:41.394002    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:39:41.394012    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:39:41.408086    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:39:41.408097    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:39:41.422174    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:39:41.422184    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:39:41.434216    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:39:41.434227    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:39:41.449669    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:41.449680    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:41.474289    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:41.474297    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:41.509199    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:41.509206    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:41.513622    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:41.513631    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:41.555639    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:39:41.555650    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:44.069609    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:49.071822    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:49.071968    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:49.085392    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:39:49.085483    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:49.096871    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:39:49.096952    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:49.107432    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:39:49.107509    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:49.117601    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:39:49.117687    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:49.128416    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:39:49.128504    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:49.139308    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:39:49.139392    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:49.149188    9127 logs.go:282] 0 containers: []
	W1211 15:39:49.149199    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:49.149279    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:49.160321    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:39:49.160336    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:49.160342    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:49.196934    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:39:49.196955    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:39:49.211673    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:39:49.211687    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:39:49.224303    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:39:49.224316    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:39:49.240341    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:39:49.240351    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:39:49.257868    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:49.257883    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:49.282643    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:39:49.282651    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:49.296649    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:49.296660    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:49.301525    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:49.301532    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:49.336036    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:39:49.336047    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:39:49.350876    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:39:49.350886    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:39:49.362251    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:39:49.362261    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:39:49.373739    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:39:49.373751    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
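Every container found this way is then tailed for its last 400 log lines. A self-contained sketch of that step, assuming the docker CLI is on PATH; the container ID in main is taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
    )

    // tailLogs mirrors the repeated `docker logs --tail 400 <id>` calls
    // above. Docker multiplexes a container's stdout and stderr, so
    // CombinedOutput captures both streams.
    func tailLogs(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", strconv.Itoa(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailLogs("4b2190ed09b4", 400)
        if err != nil {
            fmt.Println("docker logs failed:", err)
            return
        }
        fmt.Print(logs)
    }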
	I1211 15:39:51.886980    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:56.889127    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:56.889295    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:56.902845    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:39:56.902918    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:56.913681    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:39:56.913751    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:56.924610    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:39:56.924692    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:56.935015    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:39:56.935101    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:56.945366    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:39:56.945439    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:56.955797    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:39:56.955878    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:56.965952    9127 logs.go:282] 0 containers: []
	W1211 15:39:56.965964    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:56.966030    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:56.976773    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:39:56.976789    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:56.976795    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:57.010713    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:57.010724    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:57.045126    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:39:57.045139    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:39:57.059111    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:39:57.059121    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:39:57.072347    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:39:57.072359    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:39:57.084158    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:39:57.084169    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:39:57.099854    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:39:57.099866    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:39:57.112104    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:39:57.112115    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:39:57.131105    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:39:57.131120    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:57.143292    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:57.143302    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:57.148422    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:39:57.148429    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:39:57.160404    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:39:57.160414    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:39:57.177919    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:57.177932    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
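The recurring "container status" one-liner in these passes is worth unpacking: `which crictl || echo crictl` expands to crictl's full path when it is installed, or to the bare word crictl when it is not; in the latter case the sudo command fails and the `|| sudo docker ps -a` branch runs instead. A sketch that wraps the same fallback; the helper name is made up:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus reproduces the fallback one-liner from the log:
    // prefer crictl when present, otherwise fall back to docker.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        status, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(status)
    }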
	I1211 15:39:59.704366    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:04.706429    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:04.706585    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:04.717839    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:04.717926    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:04.728369    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:04.728451    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:04.739447    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:04.739530    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:04.750473    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:04.750558    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:04.761039    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:04.761110    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:04.772013    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:04.772090    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:04.786749    9127 logs.go:282] 0 containers: []
	W1211 15:40:04.786762    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:04.786830    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:04.797125    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:04.797142    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:04.797147    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:04.831920    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:04.831929    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:04.846427    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:04.846438    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:04.862231    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:04.862245    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:04.875958    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:04.875969    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:04.899621    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:04.899631    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:04.911727    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:04.911738    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:04.929446    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:04.929456    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:04.940677    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:04.940687    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:04.945235    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:04.945242    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:04.985796    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:04.985810    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:04.997933    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:04.997947    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:05.009468    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:05.009482    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:07.525882    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:12.527976    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:12.528095    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:12.545802    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:12.545885    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:12.556255    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:12.556335    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:12.566689    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:12.566761    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:12.576872    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:12.576942    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:12.587425    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:12.587500    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:12.598130    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:12.598199    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:12.612045    9127 logs.go:282] 0 containers: []
	W1211 15:40:12.612056    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:12.612120    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:12.626292    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:12.626313    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:12.626318    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:12.641224    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:12.641233    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:12.665463    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:12.665486    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:12.678531    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:12.678541    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:12.683212    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:12.683220    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:12.721191    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:12.721202    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:12.735682    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:12.735695    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:12.747965    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:12.747976    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:12.760140    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:12.760155    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:12.777654    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:12.777664    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:12.789841    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:12.789852    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:12.825089    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:12.825098    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:12.843131    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:12.843142    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:15.359815    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:20.361901    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:20.362033    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:20.373785    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:20.373862    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:20.385099    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:20.385175    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:20.395797    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:20.395864    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:20.406534    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:20.406615    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:20.424063    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:20.424143    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:20.434759    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:20.434826    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:20.444903    9127 logs.go:282] 0 containers: []
	W1211 15:40:20.444915    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:20.444981    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:20.457078    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:20.457093    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:20.457098    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:20.473047    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:20.473058    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:20.493690    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:20.493700    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:20.508321    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:20.508332    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:20.532774    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:20.532785    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:20.544293    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:20.544305    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:20.582351    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:20.582361    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:20.593924    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:20.593934    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:20.613245    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:20.613259    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:20.627285    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:20.627295    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:20.643243    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:20.643257    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:20.655767    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:20.655777    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:20.691101    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:20.691109    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
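Stepping back, this whole section is one loop: probe healthz, fail after five seconds, gather component logs, sleep briefly, retry. A schematic Go version of that control flow, with hypothetical callbacks standing in for the real probe and log-gathering code:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForHealthy repeats the probe-then-diagnose cycle visible in
    // the log: on every failed probe the per-component logs are
    // collected, then the loop sleeps and retries until the deadline
    // passes. Both callbacks are illustrative stand-ins.
    func waitForHealthy(probe func() error, collect func(), deadline time.Time) error {
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return nil
            }
            collect()
            // The timestamps above show roughly 2.5 s between the end
            // of one gathering pass and the next healthz attempt.
            time.Sleep(2500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy before the deadline")
    }

    func main() {
        err := waitForHealthy(
            func() error { return fmt.Errorf("context deadline exceeded") },
            func() { fmt.Println("gathering logs ...") },
            time.Now().Add(10*time.Second),
        )
        fmt.Println(err)
    }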
	I1211 15:40:23.197266    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:28.199435    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:28.199595    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:28.211708    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:28.211797    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:28.222695    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:28.222766    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:28.233552    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:28.233621    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:28.246218    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:28.246299    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:28.257340    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:28.257417    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:28.268075    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:28.268158    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:28.278237    9127 logs.go:282] 0 containers: []
	W1211 15:40:28.278247    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:28.278312    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:28.289348    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:28.289362    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:28.289368    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:28.303325    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:28.303336    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:28.315507    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:28.315521    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:28.331259    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:28.331270    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:28.354788    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:28.354798    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:28.367629    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:28.367640    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:28.382491    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:28.382501    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:28.387608    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:28.387615    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:28.424642    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:28.424653    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:28.435913    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:28.435922    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:28.447326    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:28.447337    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:28.464831    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:28.464840    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:28.481497    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:28.481511    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:31.018706    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:36.020810    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:36.020992    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:36.031803    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:36.031887    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:36.042468    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:36.042542    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:36.055674    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:36.055754    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:36.065958    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:36.066051    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:36.076211    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:36.076280    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:36.087142    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:36.087218    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:36.097533    9127 logs.go:282] 0 containers: []
	W1211 15:40:36.097545    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:36.097610    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:36.108284    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:36.108299    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:36.108305    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:36.112995    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:36.113006    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:36.146999    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:36.147013    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:36.161239    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:36.161253    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:36.178425    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:36.178435    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:36.190523    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:36.190534    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:36.214037    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:36.214045    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:36.225772    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:36.225785    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:36.263121    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:36.263139    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:36.281417    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:36.281432    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:36.295115    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:36.295132    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:36.308361    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:36.308377    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:36.333575    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:36.333592    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:38.849373    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:43.849908    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:43.850051    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:43.861271    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:43.861350    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:43.871792    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:43.871867    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:43.882507    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:43.882590    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:43.893372    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:43.893451    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:43.904182    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:43.904259    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:43.914687    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:43.914762    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:43.925023    9127 logs.go:282] 0 containers: []
	W1211 15:40:43.925033    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:43.925094    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:43.935335    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:43.935351    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:43.935357    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:43.939893    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:43.939900    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:43.951207    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:43.951220    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:43.963167    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:43.963177    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:43.975572    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:43.975582    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:43.993027    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:43.993037    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:44.004243    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:44.004258    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:44.039201    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:44.039209    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:44.055140    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:44.055150    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:44.069084    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:44.069094    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:44.085347    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:44.085363    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:44.110711    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:44.110727    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:44.122154    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:44.122170    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:46.657861    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:51.659843    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:51.660000    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:51.674817    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:51.674902    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:51.685254    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:51.685335    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:51.697236    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:51.697317    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:51.707724    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:51.707796    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:51.717644    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:51.717732    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:51.736096    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:51.736172    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:51.746187    9127 logs.go:282] 0 containers: []
	W1211 15:40:51.746198    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:51.746265    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:51.756816    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:51.756833    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:51.756839    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:51.768282    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:51.768293    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:51.793198    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:51.793206    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:51.828482    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:51.828494    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:51.833336    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:51.833343    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:51.869740    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:51.869751    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:51.887968    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:51.887979    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:51.902363    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:51.902374    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:51.917314    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:51.917324    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:51.928639    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:51.928649    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:51.940486    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:51.940495    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:51.952238    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:51.952248    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:51.964079    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:51.964089    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:54.487047    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:59.487647    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:59.487751    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:59.498750    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:59.498828    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:59.509496    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:59.509575    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:59.520460    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
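Note: the coredns listing grows here from two containers to four while the apiserver remains unreachable, meaning new coredns containers were created during the wait. A toy helper that surfaces such churn by diffing successive scans; purely illustrative, since minikube itself simply re-lists containers on every pass:

    package main

    import "fmt"

    // newIDs returns container IDs present in cur but not in prev,
    // a simple way to spot restart churn like the coredns jump above.
    func newIDs(prev, cur []string) []string {
        seen := make(map[string]bool, len(prev))
        for _, id := range prev {
            seen[id] = true
        }
        var added []string
        for _, id := range cur {
            if !seen[id] {
                added = append(added, id)
            }
        }
        return added
    }

    func main() {
        prev := []string{"cccbdb12b2cf", "ca88055a8d39"}
        cur := []string{"c8d8a1d9479a", "db28b2c64217", "cccbdb12b2cf", "ca88055a8d39"}
        fmt.Println(newIDs(prev, cur)) // [c8d8a1d9479a db28b2c64217]
    }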
	I1211 15:40:59.520537    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:59.532401    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:59.532485    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:59.548504    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:59.548583    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:59.559604    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:59.559678    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:59.569795    9127 logs.go:282] 0 containers: []
	W1211 15:40:59.569806    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:59.569866    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:59.580714    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:59.580732    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:59.580738    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:59.605954    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:59.605962    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:59.641746    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:59.641757    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:59.660659    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:40:59.660668    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:40:59.671826    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:59.671838    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:59.683690    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:59.683701    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:59.695675    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:59.695687    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:59.715665    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:59.715676    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:59.727457    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:59.727468    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:59.739135    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:59.739146    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:59.772319    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:40:59.772328    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:40:59.786753    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:59.786767    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:59.802602    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:59.802612    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:59.814224    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:59.814238    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:59.819388    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:59.819394    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:02.342782    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:07.344981    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:07.345130    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:07.356012    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:07.356096    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:07.366319    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:07.366394    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:07.377688    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:07.377762    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:07.388689    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:07.388759    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:07.399396    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:07.399474    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:07.409931    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:07.410010    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:07.420312    9127 logs.go:282] 0 containers: []
	W1211 15:41:07.420324    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:07.420386    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:07.435953    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:07.435970    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:07.435976    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:07.447602    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:07.447614    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:07.465111    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:07.465122    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:07.477639    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:07.477651    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:07.482743    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:07.482751    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:07.494809    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:07.494819    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:07.506635    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:07.506649    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:07.519383    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:07.519393    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:07.532270    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:07.532281    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:07.567614    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:07.567624    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:07.608436    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:07.608448    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:07.623331    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:07.623342    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:07.637222    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:07.637234    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:07.650143    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:07.650156    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:07.675542    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:07.675554    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:10.200669    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:15.203169    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:15.203275    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:15.214186    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:15.214267    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:15.224734    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:15.224813    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:15.235584    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:15.235665    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:15.246124    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:15.246198    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:15.256720    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:15.256802    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:15.267154    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:15.267229    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:15.278018    9127 logs.go:282] 0 containers: []
	W1211 15:41:15.278029    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:15.278096    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:15.288980    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:15.288997    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:15.289003    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:15.294257    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:15.294264    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:15.305642    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:15.305653    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:15.317892    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:15.317903    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:15.351327    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:15.351336    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:15.366663    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:15.366673    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:15.378185    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:15.378196    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:15.399765    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:15.399775    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:15.411498    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:15.411508    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:15.430339    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:15.430349    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:15.442033    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:15.442044    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:15.465496    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:15.465504    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:15.499351    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:15.499362    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:15.513754    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:15.513765    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:15.525412    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:15.525424    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:18.043154    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:23.044963    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:23.045063    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:23.056652    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:23.056784    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:23.069707    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:23.069780    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:23.080437    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:23.080516    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:23.091366    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:23.091437    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:23.103199    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:23.103276    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:23.113577    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:23.113650    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:23.124155    9127 logs.go:282] 0 containers: []
	W1211 15:41:23.124170    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:23.124226    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:23.134825    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:23.134843    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:23.134848    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:23.168694    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:23.168703    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:23.204304    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:23.204314    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:23.215915    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:23.215928    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:23.227428    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:23.227443    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:23.252552    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:23.252567    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:23.267059    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:23.267076    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:23.278218    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:23.278233    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:23.302192    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:23.302201    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:23.314526    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:23.314536    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:23.326499    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:23.326512    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:23.337950    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:23.337962    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:23.342427    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:23.342432    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:23.358220    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:23.358233    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:23.370948    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:23.370958    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:25.887965    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:30.890061    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:30.890181    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:30.904980    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:30.905063    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:30.916627    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:30.916711    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:30.929149    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:30.929232    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:30.940185    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:30.940260    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:30.951309    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:30.951388    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:30.961965    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:30.962040    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:30.972314    9127 logs.go:282] 0 containers: []
	W1211 15:41:30.972325    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:30.972393    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:30.986095    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:30.986113    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:30.986118    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:30.997776    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:30.997790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:31.009850    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:31.009862    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:31.022387    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:31.022398    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:31.057089    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:31.057098    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:31.090963    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:31.090972    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:31.105422    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:31.105432    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:31.118470    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:31.118482    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:31.130682    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:31.130691    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:31.135172    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:31.135179    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:31.149386    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:31.149396    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:31.164872    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:31.164883    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:31.190560    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:31.190571    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:31.205320    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:31.205332    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:31.222447    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:31.222458    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:33.735205    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:38.737252    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:38.737355    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:38.748959    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:38.749039    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:38.761034    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:38.761142    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:38.772497    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:38.772584    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:38.784177    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:38.784254    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:38.795103    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:38.795185    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:38.805880    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:38.805960    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:38.815974    9127 logs.go:282] 0 containers: []
	W1211 15:41:38.815987    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:38.816060    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:38.826574    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:38.826591    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:38.826596    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:38.838538    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:38.838548    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:38.853979    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:38.853993    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:38.865379    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:38.865389    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:38.877588    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:38.877598    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:38.889631    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:38.889646    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:38.923014    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:38.923029    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:38.937760    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:38.937778    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:38.972779    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:38.972790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:38.990949    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:38.990964    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:39.003099    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:39.003112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:39.014970    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:39.014980    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:39.033536    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:39.033549    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:39.059038    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:39.059053    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:39.071121    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:39.071131    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:41.578169    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:46.579251    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:46.579362    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:46.591049    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:46.591138    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:46.602131    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:46.602209    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:46.613890    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:46.613977    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:46.625343    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:46.625421    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:46.636791    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:46.636869    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:46.647744    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:46.647825    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:46.660052    9127 logs.go:282] 0 containers: []
	W1211 15:41:46.660063    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:46.660132    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:46.670925    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:46.670941    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:46.670947    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:46.706778    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:46.706789    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:46.722270    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:46.722281    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:46.756316    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:46.756333    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:46.762193    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:46.762201    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:46.779091    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:46.779102    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:46.790406    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:46.790415    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:46.804706    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:46.804715    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:46.820199    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:46.820213    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:46.835452    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:46.835465    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:46.852565    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:46.852576    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:46.864427    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:46.864437    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:46.884233    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:46.884243    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:46.908092    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:46.908102    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:46.920812    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:46.920823    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:49.448156    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:54.448418    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:54.448513    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:54.462067    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:54.462148    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:54.473880    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:54.473959    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:54.484890    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:54.484973    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:54.495713    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:54.495786    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:54.506241    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:54.506310    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:54.520568    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:54.520639    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:54.532358    9127 logs.go:282] 0 containers: []
	W1211 15:41:54.532374    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:54.532442    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:54.543191    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:54.543213    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:54.543218    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:54.560959    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:54.560973    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:54.583879    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:54.583885    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:54.598325    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:54.598335    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:54.613813    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:54.613826    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:54.625518    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:54.625530    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:54.637767    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:54.637776    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:54.670993    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:54.671010    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:54.706794    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:54.706807    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:54.719005    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:54.719015    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:54.730895    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:54.730906    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:54.742796    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:54.742807    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:54.762365    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:54.762376    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:54.767316    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:54.767324    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:54.789022    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:54.789033    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:57.309394    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:02.310117    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:02.310216    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:02.322099    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:02.322201    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:02.333899    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:02.333980    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:02.345893    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:02.345971    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:02.356050    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:02.356132    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:02.366456    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:02.366535    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:02.377673    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:02.377756    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:02.387944    9127 logs.go:282] 0 containers: []
	W1211 15:42:02.387958    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:02.388027    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:02.398294    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:02.398311    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:02.398316    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:02.412342    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:02.412354    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:02.436255    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:02.436265    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:02.440678    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:02.440686    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:02.455879    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:02.455890    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:02.471186    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:02.471199    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:02.482760    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:02.482772    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:02.494940    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:02.494953    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:02.507649    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:02.507664    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:02.519944    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:02.519955    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:02.533576    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:02.533589    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:02.552076    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:02.552090    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:02.586069    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:02.586080    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:02.624283    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:02.624294    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:02.636039    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:02.636056    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:05.149440    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:10.149898    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:10.149996    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:10.161379    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:10.161463    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:10.175221    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:10.175302    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:10.186431    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:10.186516    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:10.197143    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:10.197226    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:10.212067    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:10.212151    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:10.222449    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:10.222525    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:10.233177    9127 logs.go:282] 0 containers: []
	W1211 15:42:10.233188    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:10.233267    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:10.243773    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:10.243792    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:10.243797    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:10.268502    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:10.268510    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:10.287156    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:10.287166    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:10.298852    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:10.298863    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:10.333555    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:10.333564    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:10.345867    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:10.345879    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:10.361909    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:10.361918    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:10.379772    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:10.379786    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:10.391561    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:10.391576    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:10.426898    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:10.426913    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:10.439253    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:10.439266    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:10.451066    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:10.451077    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:10.466500    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:10.466511    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:10.478187    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:10.478200    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:10.483418    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:10.483428    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:13.000207    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:18.002342    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:18.002467    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:18.026146    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:18.026230    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:18.042685    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:18.042769    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:18.054404    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:18.054487    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:18.065035    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:18.065117    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:18.075385    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:18.075459    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:18.087131    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:18.087211    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:18.101015    9127 logs.go:282] 0 containers: []
	W1211 15:42:18.101026    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:18.101091    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:18.111945    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:18.111962    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:18.111967    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:18.126976    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:18.126986    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:18.138588    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:18.138599    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:18.150591    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:18.150605    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:18.162057    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:18.162067    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:18.173466    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:18.173480    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:18.177895    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:18.177901    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:18.191565    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:18.191575    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:18.206465    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:18.206477    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:18.222737    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:18.222747    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:18.239948    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:18.239957    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:18.275108    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:18.275118    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:18.287104    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:18.287117    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:18.312516    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:18.312524    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:18.324370    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:18.324386    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:20.860891    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:25.861242    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:25.861318    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:25.873050    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:25.873130    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:25.884022    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:25.884101    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:25.895370    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:25.895457    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:25.907535    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:25.907583    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:25.919982    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:25.920032    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:25.931536    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:25.931604    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:25.943101    9127 logs.go:282] 0 containers: []
	W1211 15:42:25.943112    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:25.943163    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:25.955378    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:25.955395    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:25.955400    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:25.994976    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:25.994990    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:26.010402    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:26.010436    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:26.027013    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:26.027023    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:26.042064    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:26.042076    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:26.077759    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:26.077778    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:26.090065    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:26.090074    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:26.102391    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:26.102408    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:26.113994    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:26.114004    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:26.127625    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:26.127639    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:26.152887    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:26.152902    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:26.157560    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:26.157572    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:26.171218    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:26.171228    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:26.183539    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:26.183553    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:26.200248    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:26.200259    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:28.720424    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:33.721166    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:33.721353    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:33.739646    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:33.739752    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:33.753243    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:33.753332    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:33.764746    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:33.764834    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:33.775177    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:33.775252    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:33.785629    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:33.785713    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:33.796033    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:33.796109    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:33.806183    9127 logs.go:282] 0 containers: []
	W1211 15:42:33.806196    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:33.806258    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:33.816765    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:33.816783    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:33.816789    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:33.851634    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:33.851641    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:33.866352    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:33.866367    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:33.878030    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:33.878043    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:33.889746    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:33.889755    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:33.913948    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:33.913954    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:33.925676    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:33.925685    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:33.942991    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:33.943004    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:33.979997    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:33.980011    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:33.995105    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:33.995114    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:34.008818    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:34.008828    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:34.020908    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:34.020918    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:34.025569    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:34.025579    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:34.037572    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:34.037582    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:34.052444    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:34.052454    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:36.567580    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:41.569699    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:41.573763    9127 out.go:201] 
	W1211 15:42:41.577575    9127 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1211 15:42:41.577582    9127 out.go:270] * 
	W1211 15:42:41.578115    9127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:42:41.589686    9127 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-031000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-11 15:42:41.6782 -0800 PST m=+1271.555316251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-031000 -n running-upgrade-031000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-031000 -n running-upgrade-031000: exit status 2 (15.567120792s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-031000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo cat                            | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo cat                            | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo cat                            | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo cat                            | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo                                | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo find                           | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-736000 sudo crio                           | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-736000                                     | cilium-736000             | jenkins | v1.34.0 | 11 Dec 24 15:32 PST | 11 Dec 24 15:32 PST |
	| start   | -p kubernetes-upgrade-476000                         | kubernetes-upgrade-476000 | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-356000                             | offline-docker-356000     | jenkins | v1.34.0 | 11 Dec 24 15:32 PST | 11 Dec 24 15:32 PST |
	| start   | -p stopped-upgrade-684000                            | minikube                  | jenkins | v1.26.0 | 11 Dec 24 15:32 PST | 11 Dec 24 15:33 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-476000                         | kubernetes-upgrade-476000 | jenkins | v1.34.0 | 11 Dec 24 15:32 PST | 11 Dec 24 15:32 PST |
	| start   | -p kubernetes-upgrade-476000                         | kubernetes-upgrade-476000 | jenkins | v1.34.0 | 11 Dec 24 15:32 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-476000                         | kubernetes-upgrade-476000 | jenkins | v1.34.0 | 11 Dec 24 15:32 PST | 11 Dec 24 15:32 PST |
	| start   | -p running-upgrade-031000                            | minikube                  | jenkins | v1.26.0 | 11 Dec 24 15:32 PST | 11 Dec 24 15:33 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-684000 stop                          | minikube                  | jenkins | v1.26.0 | 11 Dec 24 15:33 PST | 11 Dec 24 15:33 PST |
	| start   | -p stopped-upgrade-684000                            | stopped-upgrade-684000    | jenkins | v1.34.0 | 11 Dec 24 15:33 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-031000                            | running-upgrade-031000    | jenkins | v1.34.0 | 11 Dec 24 15:33 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
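
Each row in the audit table above is a raw minikube invocation. For reference, the crio inspection sequence recorded against the cilium profile can be reproduced by hand like this (a sketch; cilium-736000 is the profile name taken from the table):

    minikube ssh -p cilium-736000 -- sudo systemctl status crio --all --full --no-pager
    minikube ssh -p cilium-736000 -- sudo crio config
    minikube delete -p cilium-736000
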
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 15:33:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 15:33:53.876307    9127 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:33:53.876479    9127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:33:53.876483    9127 out.go:358] Setting ErrFile to fd 2...
	I1211 15:33:53.876485    9127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:33:53.876612    9127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:33:53.877626    9127 out.go:352] Setting JSON to false
	I1211 15:33:53.896002    9127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5603,"bootTime":1733954430,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:33:53.896080    9127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:33:53.899625    9127 out.go:177] * [running-upgrade-031000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:33:53.909543    9127 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:33:53.909626    9127 notify.go:220] Checking for updates...
	I1211 15:33:53.917484    9127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:33:53.921514    9127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:33:53.922651    9127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:33:53.925532    9127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:33:53.928507    9127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:33:53.931865    9127 config.go:182] Loaded profile config "running-upgrade-031000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:33:53.934488    9127 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1211 15:33:53.937540    9127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:33:53.940528    9127 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:33:53.947453    9127 start.go:297] selected driver: qemu2
	I1211 15:33:53.947457    9127 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:33:53.947501    9127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:33:53.949921    9127 cni.go:84] Creating CNI manager for ""
	I1211 15:33:53.949962    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:33:53.949995    9127 start.go:340] cluster config:
	{Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:33:53.950042    9127 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:33:53.958463    9127 out.go:177] * Starting "running-upgrade-031000" primary control-plane node in "running-upgrade-031000" cluster
	I1211 15:33:53.962559    9127 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:33:53.962572    9127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1211 15:33:53.962575    9127 cache.go:56] Caching tarball of preloaded images
	I1211 15:33:53.962635    9127 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:33:53.962641    9127 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
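
The preload step above checks the local cache before downloading anything. The cache can be inspected directly with the paths shown in this log (a sketch):

    ls -lh /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/
    # expected entry, per the log:
    #   preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
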
	I1211 15:33:53.962685    9127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/config.json ...
	I1211 15:33:53.963049    9127 start.go:360] acquireMachinesLock for running-upgrade-031000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:34:04.393911    9116 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/config.json ...
	I1211 15:34:04.394163    9116 machine.go:93] provisionDockerMachine start ...
	I1211 15:34:04.394250    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.394400    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.394404    9116 main.go:141] libmachine: About to run SSH command:
	hostname
	I1211 15:34:04.458783    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1211 15:34:04.458812    9116 buildroot.go:166] provisioning hostname "stopped-upgrade-684000"
	I1211 15:34:04.458879    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.458998    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.459005    9116 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-684000 && echo "stopped-upgrade-684000" | sudo tee /etc/hostname
	I1211 15:34:04.527279    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-684000
	
	I1211 15:34:04.527350    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.527469    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.527477    9116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-684000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-684000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-684000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 15:34:04.593834    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
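
The SSH command above is an idempotent hostname/hosts update: set the hostname, then only touch /etc/hosts if the name is missing. Rewritten as a reusable function it looks like this (a minimal sketch, not minikube's code; set_guest_hostname is a hypothetical name, and [[:space:]] replaces the GNU-specific \s):

    set_guest_hostname() {
      name=$1
      # set the kernel hostname and persist it
      sudo hostname "$name" && echo "$name" | sudo tee /etc/hostname >/dev/null
      # only edit /etc/hosts if the name is not already mapped
      if ! grep -q "[[:space:]]$name\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
          sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" /etc/hosts
        else
          echo "127.0.1.1 $name" | sudo tee -a /etc/hosts >/dev/null
        fi
      fi
    }
    set_guest_hostname stopped-upgrade-684000
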
	I1211 15:34:04.593850    9116 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20083-6627/.minikube CaCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20083-6627/.minikube}
	I1211 15:34:04.593870    9116 buildroot.go:174] setting up certificates
	I1211 15:34:04.593875    9116 provision.go:84] configureAuth start
	I1211 15:34:04.593902    9116 provision.go:143] copyHostCerts
	I1211 15:34:04.593996    9116 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem, removing ...
	I1211 15:34:04.594278    9116 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem
	I1211 15:34:04.594372    9116 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem (1078 bytes)
	I1211 15:34:04.594570    9116 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem, removing ...
	I1211 15:34:04.594576    9116 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem
	I1211 15:34:04.594632    9116 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem (1123 bytes)
	I1211 15:34:04.594757    9116 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem, removing ...
	I1211 15:34:04.594768    9116 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem
	I1211 15:34:04.594813    9116 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem (1675 bytes)
	I1211 15:34:04.594908    9116 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-684000 san=[127.0.0.1 localhost minikube stopped-upgrade-684000]
	I1211 15:34:04.659090    9116 provision.go:177] copyRemoteCerts
	I1211 15:34:04.659202    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 15:34:04.659211    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:34:04.694531    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 15:34:04.701259    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1211 15:34:04.708111    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 15:34:04.714585    9116 provision.go:87] duration metric: took 120.690916ms to configureAuth
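
configureAuth above generates a server certificate signed by the local minikube CA with the SANs listed in the log (127.0.0.1, localhost, minikube, stopped-upgrade-684000). An equivalent certificate could be produced by hand with openssl (illustrative only, not minikube's implementation; file names are assumed):

    # create a key and CSR with the org name from the log
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.stopped-upgrade-684000"
    # sign it with the CA, attaching the same SAN set the log reports
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-684000")
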
	I1211 15:34:04.714593    9116 buildroot.go:189] setting minikube options for container-runtime
	I1211 15:34:04.714694    9116 config.go:182] Loaded profile config "stopped-upgrade-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:34:04.714747    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.714834    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.714839    9116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1211 15:34:04.778839    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1211 15:34:04.778850    9116 buildroot.go:70] root file system type: tmpfs
	I1211 15:34:04.778911    9116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1211 15:34:04.778972    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.779081    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.779115    9116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1211 15:34:04.846643    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1211 15:34:04.846714    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.846835    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.846845    9116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1211 15:34:05.188631    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1211 15:34:05.188644    9116 machine.go:96] duration metric: took 794.499666ms to provisionDockerMachine
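
The diff-or-replace command above (its output shows diff failing because no unit existed yet, so the new file was installed and the symlink created) is a generic "install config only if it changed" idiom. Generalized, it looks like this (a sketch; install_unit is a hypothetical helper):

    install_unit() {
      new=$1 dst=$2
      # swap the rendered file in and restart only when it differs from what's installed
      if ! sudo diff -u "$dst" "$new"; then
        sudo mv "$new" "$dst"
        sudo systemctl daemon-reload \
          && sudo systemctl -f enable "$(basename "$dst")" \
          && sudo systemctl -f restart "$(basename "$dst")"
      fi
    }
    install_unit /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
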
	I1211 15:34:05.188652    9116 start.go:293] postStartSetup for "stopped-upgrade-684000" (driver="qemu2")
	I1211 15:34:05.188660    9116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 15:34:05.188744    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 15:34:05.188753    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:34:05.224182    9116 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 15:34:05.225516    9116 info.go:137] Remote host: Buildroot 2021.02.12
	I1211 15:34:05.225524    9116 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/addons for local assets ...
	I1211 15:34:05.225594    9116 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/files for local assets ...
	I1211 15:34:05.225685    9116 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem -> 71352.pem in /etc/ssl/certs
	I1211 15:34:05.225790    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 15:34:05.228972    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:05.236754    9116 start.go:296] duration metric: took 48.095834ms for postStartSetup
	I1211 15:34:05.236773    9116 fix.go:56] duration metric: took 19.846305417s for fixHost
	I1211 15:34:05.236831    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.236938    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:05.236942    9116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 15:34:05.299124    9127 start.go:364] duration metric: took 11.336406334s to acquireMachinesLock for "running-upgrade-031000"
	I1211 15:34:05.299147    9127 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:34:05.299154    9127 fix.go:54] fixHost starting: 
	I1211 15:34:05.299891    9127 fix.go:112] recreateIfNeeded on running-upgrade-031000: state=Running err=<nil>
	W1211 15:34:05.299901    9127 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:34:05.303377    9127 out.go:177] * Updating the running qemu2 "running-upgrade-031000" VM ...
	I1211 15:34:05.298986    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733960045.172601879
	
	I1211 15:34:05.298996    9116 fix.go:216] guest clock: 1733960045.172601879
	I1211 15:34:05.298999    9116 fix.go:229] Guest: 2024-12-11 15:34:05.172601879 -0800 PST Remote: 2024-12-11 15:34:05.236775 -0800 PST m=+20.058109126 (delta=-64.173121ms)
	I1211 15:34:05.299010    9116 fix.go:200] guest clock delta is within tolerance: -64.173121ms
	I1211 15:34:05.299012    9116 start.go:83] releasing machines lock for "stopped-upgrade-684000", held for 19.908554167s
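
The fixHost step above compares guest and host wall clocks via date +%s.%N and accepts the ~64ms delta as within tolerance. The same check can be run by hand (a sketch; port 61382, the docker user, and the id_rsa path come from this log, and %N assumes GNU date, e.g. coreutils gdate on macOS):

    host_now=$(date +%s.%N)
    guest_now=$(ssh -i /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa \
        -p 61382 docker@localhost 'date +%s.%N')
    # a delta of a few tens of milliseconds is within tolerance, per the log
    awk -v h="$host_now" -v g="$guest_now" 'BEGIN { printf "guest clock delta: %+.6fs\n", g - h }'
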
	I1211 15:34:05.299092    9116 ssh_runner.go:195] Run: cat /version.json
	I1211 15:34:05.299103    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:34:05.299092    9116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 15:34:05.299839    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	W1211 15:34:05.331524    9116 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1211 15:34:05.331583    9116 ssh_runner.go:195] Run: systemctl --version
	I1211 15:34:05.377843    9116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 15:34:05.380028    9116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 15:34:05.380089    9116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1211 15:34:05.382930    9116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1211 15:34:05.388103    9116 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 15:34:05.388114    9116 start.go:495] detecting cgroup driver to use...
	I1211 15:34:05.388230    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:05.395344    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1211 15:34:05.398364    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1211 15:34:05.401715    9116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1211 15:34:05.401754    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1211 15:34:05.405326    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:05.408451    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1211 15:34:05.411170    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:05.414133    9116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 15:34:05.417366    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1211 15:34:05.420607    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1211 15:34:05.423845    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1211 15:34:05.426919    9116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 15:34:05.430136    9116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 15:34:05.433355    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:05.497501    9116 ssh_runner.go:195] Run: sudo systemctl restart containerd
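
The run of sed edits above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver and the runc v2 shim. After the restart, the result can be spot-checked over the settings the log touches (a sketch):

    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml
    # expected, per the edits above:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.7"
    #   restrict_oom_score_adj = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
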
	I1211 15:34:05.508490    9116 start.go:495] detecting cgroup driver to use...
	I1211 15:34:05.508614    9116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1211 15:34:05.514157    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:05.518904    9116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 15:34:05.525580    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:05.530511    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1211 15:34:05.535081    9116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1211 15:34:05.593741    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1211 15:34:05.598928    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:05.604463    9116 ssh_runner.go:195] Run: which cri-dockerd
	I1211 15:34:05.605676    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1211 15:34:05.608946    9116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1211 15:34:05.614074    9116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1211 15:34:05.682149    9116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1211 15:34:05.749474    9116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1211 15:34:05.749534    9116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1211 15:34:05.755375    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:05.818729    9116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:06.961526    9116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.14279425s)
	I1211 15:34:06.961672    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1211 15:34:06.968536    9116 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1211 15:34:06.976149    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:06.982510    9116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1211 15:34:07.049256    9116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1211 15:34:07.114784    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:07.176383    9116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1211 15:34:07.182504    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:07.187509    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:07.246501    9116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1211 15:34:07.285867    9116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1211 15:34:07.285975    9116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1211 15:34:07.287899    9116 start.go:563] Will wait 60s for crictl version
	I1211 15:34:07.287939    9116 ssh_runner.go:195] Run: which crictl
	I1211 15:34:07.289299    9116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 15:34:07.304979    9116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1211 15:34:07.305057    9116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:07.321518    9116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:05.311294    9127 machine.go:93] provisionDockerMachine start ...
	I1211 15:34:05.311396    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.311535    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.311540    9127 main.go:141] libmachine: About to run SSH command:
	hostname
	I1211 15:34:05.385465    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-031000
	
	I1211 15:34:05.385483    9127 buildroot.go:166] provisioning hostname "running-upgrade-031000"
	I1211 15:34:05.385533    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.385655    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.385662    9127 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-031000 && echo "running-upgrade-031000" | sudo tee /etc/hostname
	I1211 15:34:05.463277    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-031000
	
	I1211 15:34:05.463365    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.463500    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.463509    9127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-031000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-031000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-031000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 15:34:05.539562    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 15:34:05.539575    9127 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20083-6627/.minikube CaCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20083-6627/.minikube}
	I1211 15:34:05.539583    9127 buildroot.go:174] setting up certificates
	I1211 15:34:05.539601    9127 provision.go:84] configureAuth start
	I1211 15:34:05.539615    9127 provision.go:143] copyHostCerts
	I1211 15:34:05.539681    9127 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem, removing ...
	I1211 15:34:05.539689    9127 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem
	I1211 15:34:05.539807    9127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem (1078 bytes)
	I1211 15:34:05.539998    9127 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem, removing ...
	I1211 15:34:05.540002    9127 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem
	I1211 15:34:05.540046    9127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem (1123 bytes)
	I1211 15:34:05.540160    9127 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem, removing ...
	I1211 15:34:05.540163    9127 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem
	I1211 15:34:05.540205    9127 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem (1675 bytes)
	I1211 15:34:05.540301    9127 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-031000 san=[127.0.0.1 localhost minikube running-upgrade-031000]
	I1211 15:34:05.575873    9127 provision.go:177] copyRemoteCerts
	I1211 15:34:05.575942    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 15:34:05.575954    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:34:05.615112    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 15:34:05.622459    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1211 15:34:05.629640    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 15:34:05.636579    9127 provision.go:87] duration metric: took 96.965167ms to configureAuth
	I1211 15:34:05.636588    9127 buildroot.go:189] setting minikube options for container-runtime
	I1211 15:34:05.636696    9127 config.go:182] Loaded profile config "running-upgrade-031000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:34:05.636751    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.636842    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.636847    9127 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1211 15:34:05.710250    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1211 15:34:05.710262    9127 buildroot.go:70] root file system type: tmpfs
	I1211 15:34:05.710323    9127 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1211 15:34:05.710400    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.710523    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.710559    9127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1211 15:34:05.789720    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1211 15:34:05.789791    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.789902    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.789910    9127 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1211 15:34:05.882638    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 15:34:05.882651    9127 machine.go:96] duration metric: took 571.36825ms to provisionDockerMachine
	I1211 15:34:05.882658    9127 start.go:293] postStartSetup for "running-upgrade-031000" (driver="qemu2")
	I1211 15:34:05.882667    9127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 15:34:05.882734    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 15:34:05.882744    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:34:05.924954    9127 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 15:34:05.926564    9127 info.go:137] Remote host: Buildroot 2021.02.12
	I1211 15:34:05.926571    9127 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/addons for local assets ...
	I1211 15:34:05.926653    9127 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/files for local assets ...
	I1211 15:34:05.926743    9127 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem -> 71352.pem in /etc/ssl/certs
	I1211 15:34:05.926847    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 15:34:05.929825    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:05.936800    9127 start.go:296] duration metric: took 54.138333ms for postStartSetup
	I1211 15:34:05.936813    9127 fix.go:56] duration metric: took 637.681875ms for fixHost
	I1211 15:34:05.936856    9127 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.936957    9127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a5f1b0] 0x102a619f0 <nil>  [] 0s} localhost 61422 <nil> <nil>}
	I1211 15:34:05.936961    9127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 15:34:06.008288    9127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733960045.939916048
	
	I1211 15:34:06.008297    9127 fix.go:216] guest clock: 1733960045.939916048
	I1211 15:34:06.008301    9127 fix.go:229] Guest: 2024-12-11 15:34:05.939916048 -0800 PST Remote: 2024-12-11 15:34:05.936814 -0800 PST m=+12.085597709 (delta=3.102048ms)
	I1211 15:34:06.008314    9127 fix.go:200] guest clock delta is within tolerance: 3.102048ms
	I1211 15:34:06.008317    9127 start.go:83] releasing machines lock for "running-upgrade-031000", held for 709.200709ms
	I1211 15:34:06.008387    9127 ssh_runner.go:195] Run: cat /version.json
	I1211 15:34:06.008396    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:34:06.008387    9127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 15:34:06.008432    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	W1211 15:34:06.008912    9127 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:61660->127.0.0.1:61422: read: connection reset by peer
	I1211 15:34:06.008926    9127 retry.go:31] will retry after 223.672014ms: ssh: handshake failed: read tcp 127.0.0.1:61660->127.0.0.1:61422: read: connection reset by peer
	W1211 15:34:06.276633    9127 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1211 15:34:06.276706    9127 ssh_runner.go:195] Run: systemctl --version
	I1211 15:34:06.281581    9127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 15:34:06.283309    9127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 15:34:06.283354    9127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1211 15:34:06.287621    9127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1211 15:34:06.306921    9127 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 15:34:06.306940    9127 start.go:495] detecting cgroup driver to use...
	I1211 15:34:06.307004    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:06.317926    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1211 15:34:06.325573    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1211 15:34:06.344372    9127 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1211 15:34:06.344449    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1211 15:34:06.347463    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:06.350513    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1211 15:34:06.353664    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:06.358088    9127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 15:34:06.365139    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1211 15:34:06.368344    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1211 15:34:06.374010    9127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1211 15:34:06.377095    9127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 15:34:06.381766    9127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 15:34:06.386302    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:06.512009    9127 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1211 15:34:06.533413    9127 start.go:495] detecting cgroup driver to use...
	I1211 15:34:06.533497    9127 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1211 15:34:06.541438    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:06.547189    9127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 15:34:06.555669    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:06.572062    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1211 15:34:06.583794    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:06.594364    9127 ssh_runner.go:195] Run: which cri-dockerd
	I1211 15:34:06.595593    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1211 15:34:06.598149    9127 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1211 15:34:06.603022    9127 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1211 15:34:06.709547    9127 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1211 15:34:06.820002    9127 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1211 15:34:06.820065    9127 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1211 15:34:06.825247    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:06.926786    9127 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:07.341362    9116 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1211 15:34:07.341518    9116 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1211 15:34:07.342765    9116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
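The one-liner above is a replace-or-append edit: drop any existing line ending in the host name, append the desired mapping, and copy the result back over /etc/hosts. The same pattern as a generic sketch (ip and name are placeholders; this run uses 10.0.2.2 and host.minikube.internal):

    # Idempotently pin a name in /etc/hosts (sketch)
    update_host() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }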
	I1211 15:34:07.346715    9116 kubeadm.go:883] updating cluster {Name:stopped-upgrade-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61417 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1211 15:34:07.346760    9116 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:34:07.346811    9116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:07.357021    9116 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:07.357030    9116 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:07.357089    9116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:07.360205    9116 ssh_runner.go:195] Run: which lz4
	I1211 15:34:07.361448    9116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 15:34:07.362571    9116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 15:34:07.362581    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1211 15:34:08.341712    9116 docker.go:653] duration metric: took 980.334708ms to copy over tarball
	I1211 15:34:08.341787    9116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 15:34:09.523429    9116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1816645s)
	I1211 15:34:09.523443    9116 ssh_runner.go:146] rm: /preloaded.tar.lz4
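The preload path above is: check whether the tarball is already on the node, copy it if not, extract it with lz4 into /var, then delete it. The same flow as one standalone sketch ($LOCAL_TARBALL is a placeholder; the log copies it over SSH from the Jenkins cache):

    # Preload cached images into /var/lib/docker (sketch)
    if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
      sudo cp "$LOCAL_TARBALL" /preloaded.tar.lz4
    fi
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4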
	I1211 15:34:09.539029    9116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:09.542288    9116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1211 15:34:09.547447    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:09.612806    9116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:11.203263    9116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.590485292s)
	I1211 15:34:11.203364    9116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:11.218754    9116 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:11.218764    9116 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:11.218769    9116 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1211 15:34:11.226436    9116 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1211 15:34:11.227742    9116 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.229395    9116 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.230585    9116 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.230636    9116 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1211 15:34:11.230740    9116 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.232733    9116 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.232751    9116 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.234430    9116 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.234433    9116 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.235574    9116 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.235976    9116 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:11.236987    9116 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.237081    9116 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:11.237951    9116 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:11.238651    9116 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:11.687928    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1211 15:34:11.700774    9116 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1211 15:34:11.700991    9116 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1211 15:34:11.701044    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1211 15:34:11.712041    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1211 15:34:11.712204    9116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1211 15:34:11.713832    9116 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1211 15:34:11.713848    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1211 15:34:11.725815    9116 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1211 15:34:11.725834    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
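Images missing from the runtime are streamed in from the cache rather than pulled. A minimal sketch of the same load step and its common equivalent:

    # Load a saved image tarball into the Docker daemon (sketch)
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load
    # equivalent when the file is readable without sudo:
    # docker load -i /var/lib/minikube/images/pause_3.7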
	I1211 15:34:11.729973    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.753116    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.771078    9116 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1211 15:34:11.771140    9116 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1211 15:34:11.771157    9116 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.771225    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.772900    9116 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1211 15:34:11.772921    9116 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.772971    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.784862    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1211 15:34:11.784896    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1211 15:34:11.800271    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.811634    9116 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1211 15:34:11.811661    9116 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.811738    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.822526    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1211 15:34:11.857322    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.868091    9116 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1211 15:34:11.868121    9116 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.868189    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.878160    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1211 15:34:11.936863    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.948230    9116 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1211 15:34:11.948250    9116 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.948320    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.959052    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W1211 15:34:12.002414    9116 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:12.002569    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:12.015555    9116 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1211 15:34:12.015576    9116 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:12.015637    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:12.025188    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1211 15:34:12.025329    9116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:12.026853    9116 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1211 15:34:12.026865    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1211 15:34:12.071184    9116 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:12.071197    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1211 15:34:12.108622    9116 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1211 15:34:12.722730    9116 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:12.722897    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:12.738173    9116 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1211 15:34:12.738204    9116 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:12.738272    9116 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:12.754776    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1211 15:34:12.754929    9116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:12.756346    9116 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1211 15:34:12.756364    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1211 15:34:12.788395    9116 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:12.788410    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1211 15:34:13.025514    9116 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1211 15:34:13.025562    9116 cache_images.go:92] duration metric: took 1.806841083s to LoadCachedImages
	W1211 15:34:13.025797    9116 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1211 15:34:13.025806    9116 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1211 15:34:13.025995    9116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-684000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
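A quick sketch, assuming systemd tooling on the node, of confirming that the kubelet unit and the 10-kubeadm.conf drop-in rendered above are what systemd will actually run:

    systemctl cat kubelet                  # unit file plus drop-ins, including 10-kubeadm.conf
    systemctl show kubelet -p ExecStart    # the effective kubelet command line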
	I1211 15:34:13.026073    9116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1211 15:34:13.044144    9116 cni.go:84] Creating CNI manager for ""
	I1211 15:34:13.044160    9116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:34:13.044390    9116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 15:34:13.044404    9116 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-684000 NodeName:stopped-upgrade-684000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 15:34:13.044483    9116 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-684000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
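A minimal sketch of validating a rendered config like the one above without touching the node's state (kubeadm's dry run executes the preflight and rendering steps only):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run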
	I1211 15:34:13.044558    9116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1211 15:34:13.047949    9116 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 15:34:13.048021    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 15:34:13.051326    9116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1211 15:34:13.057207    9116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 15:34:13.063172    9116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1211 15:34:13.069609    9116 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1211 15:34:13.071122    9116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 15:34:13.075036    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:13.137810    9116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:34:13.148471    9116 certs.go:68] Setting up /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000 for IP: 10.0.2.15
	I1211 15:34:13.148480    9116 certs.go:194] generating shared ca certs ...
	I1211 15:34:13.148490    9116 certs.go:226] acquiring lock for ca certs: {Name:mk9a2f9aee3b15a0ae3e213800d46f88db78207a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.148877    9116 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key
	I1211 15:34:13.148989    9116 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key
	I1211 15:34:13.149119    9116 certs.go:256] generating profile certs ...
	I1211 15:34:13.149280    9116 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key
	I1211 15:34:13.149294    9116 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9
	I1211 15:34:13.149305    9116 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1211 15:34:13.260791    9116 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9 ...
	I1211 15:34:13.260830    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9: {Name:mk1cc3a9ab509aafe3dba5606719792a1c165d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.261415    9116 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9 ...
	I1211 15:34:13.261421    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9: {Name:mk906a74d2dc360661e7ccf4c6ed3103ec30a937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.261604    9116 certs.go:381] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt
	I1211 15:34:13.261727    9116 certs.go:385] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key
	I1211 15:34:13.261978    9116 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/proxy-client.key
	I1211 15:34:13.262155    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem (1338 bytes)
	W1211 15:34:13.262341    9116 certs.go:480] ignoring /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135_empty.pem, impossibly tiny 0 bytes
	I1211 15:34:13.262348    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 15:34:13.262369    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem (1078 bytes)
	I1211 15:34:13.262387    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem (1123 bytes)
	I1211 15:34:13.262405    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem (1675 bytes)
	I1211 15:34:13.262442    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:13.263788    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 15:34:13.270610    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 15:34:13.277577    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 15:34:13.285122    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 15:34:13.292563    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1211 15:34:13.299683    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 15:34:13.306107    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 15:34:13.313190    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 15:34:13.320358    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /usr/share/ca-certificates/71352.pem (1708 bytes)
	I1211 15:34:13.326736    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 15:34:13.333599    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem --> /usr/share/ca-certificates/7135.pem (1338 bytes)
	I1211 15:34:13.340942    9116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 15:34:13.346437    9116 ssh_runner.go:195] Run: openssl version
	I1211 15:34:13.348297    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71352.pem && ln -fs /usr/share/ca-certificates/71352.pem /etc/ssl/certs/71352.pem"
	I1211 15:34:13.351298    9116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71352.pem
	I1211 15:34:13.352614    9116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:22 /usr/share/ca-certificates/71352.pem
	I1211 15:34:13.352638    9116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71352.pem
	I1211 15:34:13.354393    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71352.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 15:34:13.357386    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 15:34:13.360513    9116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:13.361939    9116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:13.361967    9116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:13.363838    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 15:34:13.366623    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135.pem && ln -fs /usr/share/ca-certificates/7135.pem /etc/ssl/certs/7135.pem"
	I1211 15:34:13.369893    9116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135.pem
	I1211 15:34:13.371377    9116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:22 /usr/share/ca-certificates/7135.pem
	I1211 15:34:13.371402    9116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135.pem
	I1211 15:34:13.373081    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7135.pem /etc/ssl/certs/51391683.0"
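The hashed .0 symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the system cert directory is indexed. A sketch of deriving one:

    # Derive the c_rehash-style link name for a CA cert (sketch)
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run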
	I1211 15:34:13.376423    9116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 15:34:13.377980    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 15:34:13.379963    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 15:34:13.381771    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 15:34:13.383681    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 15:34:13.385483    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 15:34:13.387290    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
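The -checkend 86400 runs above succeed only if each certificate is still valid 24 hours from now; a sketch of the check in isolation:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expires within 24h"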
	I1211 15:34:13.389202    9116 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61417 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:34:13.389282    9116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:13.399433    9116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 15:34:13.402431    9116 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1211 15:34:13.402436    9116 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1211 15:34:13.402467    9116 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 15:34:13.405672    9116 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:13.405906    9116 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-684000" does not appear in /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:34:13.405925    9116 kubeconfig.go:62] /Users/jenkins/minikube-integration/20083-6627/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-684000" cluster setting kubeconfig missing "stopped-upgrade-684000" context setting]
	I1211 15:34:13.406116    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.407971    9116 kapi.go:59] client config for stopped-upgrade-684000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065580b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:34:13.413321    9116 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 15:34:13.416102    9116 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-684000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1211 15:34:13.416111    9116 kubeadm.go:1160] stopping kube-system containers ...
	I1211 15:34:13.416176    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:13.426629    9116 docker.go:483] Stopping containers: [75ea3383cdcb f6ac5f0dd06f a36cfd33e9ad b21ec5886c57 ce6d2e2ea14f 42fa55656c01 fce2dc366bd4 081582cc5331]
	I1211 15:34:13.426703    9116 ssh_runner.go:195] Run: docker stop 75ea3383cdcb f6ac5f0dd06f a36cfd33e9ad b21ec5886c57 ce6d2e2ea14f 42fa55656c01 fce2dc366bd4 081582cc5331
	I1211 15:34:13.437182    9116 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 15:34:13.443057    9116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:34:13.445924    9116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 15:34:13.445932    9116 kubeadm.go:157] found existing configuration files:
	
	I1211 15:34:13.445963    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf
	I1211 15:34:13.448833    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 15:34:13.448862    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:34:13.451329    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf
	I1211 15:34:13.453928    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 15:34:13.453964    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:34:13.456942    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf
	I1211 15:34:13.459617    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 15:34:13.459651    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:34:13.462156    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf
	I1211 15:34:13.464949    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 15:34:13.464970    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 15:34:13.467672    9116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:34:13.470275    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:13.493880    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:13.925918    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:14.039704    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:14.070382    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:14.092621    9116 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:34:14.092709    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:14.594894    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:15.094758    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:15.111206    9116 api_server.go:72] duration metric: took 1.018616875s to wait for apiserver process to appear ...
	I1211 15:34:15.111220    9116 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:34:15.111230    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:20.114526    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:20.114637    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
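A minimal sketch of probing the same healthz endpoint by hand from the node (/healthz is normally reachable anonymously; the CA path matches the certs copied earlier in this log):

    curl --cacert /var/lib/minikube/certs/ca.crt https://10.0.2.15:8443/healthz
    # or, skipping verification for a quick check:
    # curl -k https://10.0.2.15:8443/healthz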
	I1211 15:34:23.249791    9127 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.323491167s)
	I1211 15:34:23.249869    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1211 15:34:23.254818    9127 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1211 15:34:23.262261    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:23.267411    9127 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1211 15:34:23.353075    9127 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1211 15:34:23.444963    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:23.536577    9127 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1211 15:34:23.543010    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:23.547387    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:23.641952    9127 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
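cri-docker is socket-activated, so the socket unit is unmasked, enabled, and restarted before the service. A sketch of checking both units once the restarts settle:

    systemctl is-active cri-docker.socket cri-docker.service
    stat /var/run/cri-dockerd.sock   # the socket path minikube waits on next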
	I1211 15:34:23.682232    9127 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1211 15:34:23.682327    9127 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1211 15:34:23.684274    9127 start.go:563] Will wait 60s for crictl version
	I1211 15:34:23.684341    9127 ssh_runner.go:195] Run: which crictl
	I1211 15:34:23.685912    9127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 15:34:23.697909    9127 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1211 15:34:23.697995    9127 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:23.710591    9127 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:23.728589    9127 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1211 15:34:23.728749    9127 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1211 15:34:23.730203    9127 kubeadm.go:883] updating cluster {Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1211 15:34:23.730246    9127 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:34:23.730296    9127 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:23.740974    9127 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:23.740982    9127 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:23.741041    9127 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:23.744096    9127 ssh_runner.go:195] Run: which lz4
	I1211 15:34:23.745506    9127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 15:34:23.746800    9127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 15:34:23.746809    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1211 15:34:25.115653    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:25.115674    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:24.737503    9127 docker.go:653] duration metric: took 992.084083ms to copy over tarball
	I1211 15:34:24.737578    9127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 15:34:25.947140    9127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.209586083s)
	I1211 15:34:25.947157    9127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 15:34:25.964249    9127 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:25.967828    9127 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1211 15:34:25.972922    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:26.059020    9127 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:27.251243    9127 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.192237875s)
	I1211 15:34:27.251340    9127 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:27.264476    9127 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:27.264494    9127 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:27.264502    9127 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1211 15:34:27.268957    9127 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:27.271925    9127 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.274782    9127 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:27.274860    9127 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.277085    9127 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.277101    9127 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.278459    9127 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.279180    9127 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.280231    9127 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.280513    9127 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.281629    9127 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1211 15:34:27.282216    9127 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.283137    9127 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.283196    9127 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:27.284212    9127 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1211 15:34:27.285119    9127 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:27.869286    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.874704    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.876583    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.884294    9127 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1211 15:34:27.884331    9127 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.884380    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:27.896136    9127 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1211 15:34:27.896165    9127 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.896207    9127 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1211 15:34:27.896242    9127 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.896249    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:27.896276    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:27.904749    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1211 15:34:27.916650    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1211 15:34:27.916672    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1211 15:34:27.962800    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.974127    9127 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1211 15:34:27.974149    9127 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.974212    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:27.977032    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.987504    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1211 15:34:27.992008    9127 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1211 15:34:27.992029    9127 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:27.992082    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:28.002640    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1211 15:34:28.052306    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1211 15:34:28.062993    9127 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1211 15:34:28.063012    9127 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1211 15:34:28.063073    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1211 15:34:28.073442    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1211 15:34:28.073578    9127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1211 15:34:28.075378    9127 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1211 15:34:28.075391    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1211 15:34:28.083948    9127 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1211 15:34:28.083959    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1211 15:34:28.111525    9127 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
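
Each "Transferred and loaded ... from cache" line above is the end of the same three-step cycle: stat to test whether the tarball already exists on the guest, scp it over when the stat exits non-zero, then pipe it into docker load. A local sketch of that cycle (assumed shape only; the real run executes these commands over SSH, and the cache path below is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func loadCachedImage(cachePath, remotePath string) error {
        // Existence check; a non-zero exit mirrors the "No such file" branch above.
        if err := exec.Command("stat", "-c", "%s %y", remotePath).Run(); err != nil {
            // In the log this copy happens via scp; locally a cp stands in for it.
            if err := exec.Command("cp", cachePath, remotePath).Run(); err != nil {
                return fmt.Errorf("transfer: %w", err)
            }
        }
        // Same pipeline as the log: sudo cat <tarball> | docker load
        return exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", remotePath)).Run()
    }

    func main() {
        if err := loadCachedImage(
            "/path/to/cache/pause_3.7", // hypothetical host-side cache tarball
            "/var/lib/minikube/images/pause_3.7",
        ); err != nil {
            fmt.Println(err)
        }
    }
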
	W1211 15:34:28.150601    9127 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:28.150758    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:28.161839    9127 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1211 15:34:28.161864    9127 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:28.161926    9127 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:28.173740    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1211 15:34:28.173868    9127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:28.175773    9127 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1211 15:34:28.175784    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1211 15:34:28.222106    9127 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:28.222127    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W1211 15:34:28.245584    9127 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:28.245858    9127 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:28.272548    9127 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1211 15:34:28.272598    9127 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1211 15:34:28.272620    9127 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:28.272676    9127 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
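
The two W-level "arch mismatch" lines flag that the copy of the image found on the host daemon is amd64 while the arm64 guest needs arm64, so the arm64 variant is fetched before transfer. One way to surface the same check with the Docker CLI (a sketch, not minikube's internal lookup):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Architecture}}",
            "registry.k8s.io/coredns/coredns:v1.8.6").Output()
        if err != nil {
            panic(err)
        }
        if got := strings.TrimSpace(string(out)); got != "arm64" {
            fmt.Printf("arch mismatch: want arm64 got %s. fixing\n", got)
        }
    }
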
	I1211 15:34:30.116393    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:30.116410    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:29.174200    9127 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1211 15:34:29.174474    9127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:29.178542    9127 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1211 15:34:29.178594    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1211 15:34:29.228223    9127 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:29.228239    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1211 15:34:29.467625    9127 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1211 15:34:29.467663    9127 cache_images.go:92] duration metric: took 2.203221917s to LoadCachedImages
	W1211 15:34:29.467752    9127 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1211 15:34:29.467760    9127 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1211 15:34:29.467820    9127 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-031000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 15:34:29.467907    9127 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1211 15:34:29.481696    9127 cni.go:84] Creating CNI manager for ""
	I1211 15:34:29.481714    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:34:29.481726    9127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 15:34:29.481741    9127 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-031000 NodeName:running-upgrade-031000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 15:34:29.481824    9127 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-031000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 15:34:29.481893    9127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1211 15:34:29.485383    9127 binaries.go:44] Found k8s binaries, skipping transfer
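
The kubeadm.go:189 options line and the kubeadm config block above it are two views of the same data: the option fields are substituted into a config template before the result is written to /var/tmp/minikube/kubeadm.yaml.new below. A toy text/template rendering of the first stanza (an assumption for illustration, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `))
        // Values taken from the kubeadm options logged above.
        _ = tmpl.Execute(os.Stdout, map[string]any{
            "AdvertiseAddress": "10.0.2.15",
            "APIServerPort":    8443,
        })
    }
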
	I1211 15:34:29.485420    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 15:34:29.488303    9127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1211 15:34:29.493283    9127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 15:34:29.498003    9127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1211 15:34:29.503680    9127 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1211 15:34:29.505425    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:29.592964    9127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:34:29.598609    9127 certs.go:68] Setting up /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000 for IP: 10.0.2.15
	I1211 15:34:29.598616    9127 certs.go:194] generating shared ca certs ...
	I1211 15:34:29.598625    9127 certs.go:226] acquiring lock for ca certs: {Name:mk9a2f9aee3b15a0ae3e213800d46f88db78207a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.598777    9127 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key
	I1211 15:34:29.599100    9127 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key
	I1211 15:34:29.599106    9127 certs.go:256] generating profile certs ...
	I1211 15:34:29.599400    9127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.key
	I1211 15:34:29.599418    9127 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6
	I1211 15:34:29.599427    9127 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1211 15:34:29.681554    9127 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6 ...
	I1211 15:34:29.681565    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6: {Name:mk94e27a7067bfbb2a635ef1c0f7e2a4c01f2256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.681834    9127 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6 ...
	I1211 15:34:29.681839    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6: {Name:mk0a7a9ea9bc2778f3cc6c528fcf72f51e126b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.681990    9127 certs.go:381] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt.d73f31b6 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt
	I1211 15:34:29.682110    9127 certs.go:385] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key.d73f31b6 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key
	I1211 15:34:29.682439    9127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/proxy-client.key
	I1211 15:34:29.682613    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem (1338 bytes)
	W1211 15:34:29.682802    9127 certs.go:480] ignoring /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135_empty.pem, impossibly tiny 0 bytes
	I1211 15:34:29.682808    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 15:34:29.682977    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem (1078 bytes)
	I1211 15:34:29.683159    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem (1123 bytes)
	I1211 15:34:29.683352    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem (1675 bytes)
	I1211 15:34:29.683524    9127 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:29.685551    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 15:34:29.693528    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 15:34:29.701235    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 15:34:29.710307    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 15:34:29.718527    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1211 15:34:29.725589    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 15:34:29.732852    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 15:34:29.739851    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 15:34:29.746529    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 15:34:29.754082    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem --> /usr/share/ca-certificates/7135.pem (1338 bytes)
	I1211 15:34:29.761253    9127 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /usr/share/ca-certificates/71352.pem (1708 bytes)
	I1211 15:34:29.768507    9127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 15:34:29.773712    9127 ssh_runner.go:195] Run: openssl version
	I1211 15:34:29.775527    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71352.pem && ln -fs /usr/share/ca-certificates/71352.pem /etc/ssl/certs/71352.pem"
	I1211 15:34:29.779566    9127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71352.pem
	I1211 15:34:29.781302    9127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:22 /usr/share/ca-certificates/71352.pem
	I1211 15:34:29.781337    9127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71352.pem
	I1211 15:34:29.783372    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71352.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 15:34:29.786141    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 15:34:29.789338    9127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:29.790843    9127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:29.790870    9127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:29.792974    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 15:34:29.795575    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135.pem && ln -fs /usr/share/ca-certificates/7135.pem /etc/ssl/certs/7135.pem"
	I1211 15:34:29.798925    9127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135.pem
	I1211 15:34:29.800485    9127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:22 /usr/share/ca-certificates/7135.pem
	I1211 15:34:29.800515    9127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135.pem
	I1211 15:34:29.802460    9127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7135.pem /etc/ssl/certs/51391683.0"
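
Each certificate lands in the trust store twice: the PEM itself under /usr/share/ca-certificates, plus a /etc/ssl/certs/<subject-hash>.0 symlink, where the hash comes from `openssl x509 -hash` (that is where names like b5213941.0 and 51391683.0 above originate). A sketch of deriving the link name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // b5213941 for this CA, per the log
        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }
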
	I1211 15:34:29.805686    9127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 15:34:29.807313    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 15:34:29.809381    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 15:34:29.811396    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 15:34:29.813739    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 15:34:29.815995    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 15:34:29.817621    9127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
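
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is the signal for regenerating it. The equivalent expiry test in Go's standard library (a sketch):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same semantics as -checkend 86400: does it expire within 24h?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
        }
    }
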
	I1211 15:34:29.819357    9127 kubeadm.go:392] StartCluster: {Name:running-upgrade-031000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61515 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:34:29.819430    9127 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:29.836773    9127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 15:34:29.840025    9127 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1211 15:34:29.840038    9127 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1211 15:34:29.840073    9127 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 15:34:29.843061    9127 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.843559    9127 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-031000" does not appear in /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:34:29.843672    9127 kubeconfig.go:62] /Users/jenkins/minikube-integration/20083-6627/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-031000" cluster setting kubeconfig missing "running-upgrade-031000" context setting]
	I1211 15:34:29.843863    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:29.844298    9127 kapi.go:59] client config for running-upgrade-031000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044bc0b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:34:29.844787    9127 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 15:34:29.848054    9127 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-031000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
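
Drift detection here is just a diff: the freshly staged kubeadm.yaml.new is compared against the file already on the node, and any difference (above, the cri-dockerd socket gaining its unix:// prefix and the cgroup driver switching from systemd to cgroupfs) forces a reconfigure. A sketch of that check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err != nil { // diff exits 1 when the files differ => reconfigure
            fmt.Printf("detected kubeadm config drift:\n%s", out)
        }
    }
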
	I1211 15:34:29.848060    9127 kubeadm.go:1160] stopping kube-system containers ...
	I1211 15:34:29.848113    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:29.859366    9127 docker.go:483] Stopping containers: [6085b0e488b0 4b80b10abc15 1140a38c8ff2 deb6c6e8ccd5 54bb8dab6d62 c12e2ab1ed1d 14d75f9b9c9d a07f1fe8059c d34888fb8fe2 9156d239f005 a954fb185965 ebd7105b237d c6f7cfc4bc17 6be8bf310db2 6f0113ec40f2 1588ec1e49c6 eb06ed70196d 95038533cd6f]
	I1211 15:34:29.859446    9127 ssh_runner.go:195] Run: docker stop 6085b0e488b0 4b80b10abc15 1140a38c8ff2 deb6c6e8ccd5 54bb8dab6d62 c12e2ab1ed1d 14d75f9b9c9d a07f1fe8059c d34888fb8fe2 9156d239f005 a954fb185965 ebd7105b237d c6f7cfc4bc17 6be8bf310db2 6f0113ec40f2 1588ec1e49c6 eb06ed70196d 95038533cd6f
	I1211 15:34:29.871631    9127 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 15:34:29.964407    9127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:34:29.968304    9127 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Dec 11 23:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec 11 23:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 11 23:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Dec 11 23:33 /etc/kubernetes/scheduler.conf
	
	I1211 15:34:29.968350    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf
	I1211 15:34:29.971148    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.971184    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:34:29.974386    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf
	I1211 15:34:29.977605    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.977640    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:34:29.980678    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf
	I1211 15:34:29.983440    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.983467    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:34:29.986561    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf
	I1211 15:34:29.989681    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:29.989717    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
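
The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:61515 is deleted, so the `kubeadm init phase kubeconfig` step below regenerates it with the right endpoint. A compact sketch:

    package main

    import "os/exec"

    func ensureEndpoint(path, endpoint string) {
        // grep exits 1 when the endpoint is absent; remove the file so
        // `kubeadm init phase kubeconfig` can regenerate it.
        if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
            _ = exec.Command("sudo", "rm", "-f", path).Run()
        }
    }

    func main() {
        for _, conf := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            ensureEndpoint("/etc/kubernetes/"+conf,
                "https://control-plane.minikube.internal:61515")
        }
    }
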
	I1211 15:34:29.992329    9127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:34:29.995051    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.017364    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.491793    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.876377    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.902616    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:30.926655    9127 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:34:30.926746    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:31.426901    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:31.928828    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:31.933842    9127 api_server.go:72] duration metric: took 1.007221709s to wait for apiserver process to appear ...
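
The apiserver wait runs in two stages: first for the kube-apiserver process itself (pgrep, retried roughly every half second, as the timestamps above show), then for the /healthz endpoint. A sketch of the first stage:

    package main

    import (
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess retries pgrep until the apiserver process
    // appears, mirroring the ~500ms cadence visible in the log.
    func waitForAPIServerProcess() {
        for {
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                return // process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() { waitForAPIServerProcess() }
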
	I1211 15:34:31.933853    9127 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:34:31.933863    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:35.117343    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:35.117393    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:36.933910    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:36.933950    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:40.118854    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:40.118891    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:41.935653    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:41.935697    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:45.120634    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:45.120660    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:46.935897    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:46.935925    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:50.122746    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:50.122792    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:51.936092    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:51.936137    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:55.123516    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:55.123563    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:56.936625    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:56.936697    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:00.125885    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:00.125982    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:01.937336    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:01.937390    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:05.128405    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:05.128442    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:06.938265    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:06.938337    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:10.130656    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:10.130752    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:11.939404    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:11.939446    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
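
The repeating stopped/Checking pairs above (from both test processes, 9116 and 9127) are a health poll: a GET to /healthz with a short client timeout, re-issued as soon as the previous attempt times out, giving the ~5s cadence. A minimal sketch of the loop, with TLS details elided (the real check authenticates against the cluster CA; this assumes none):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // the timeout itself paces the retries
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // "Client.Timeout exceeded", as logged
                time.Sleep(time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return // healthy
            }
            time.Sleep(time.Second)
        }
    }
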
	I1211 15:35:15.132004    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:15.133079    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:15.150368    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:15.150469    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:15.163213    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:15.163303    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:15.174265    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:15.174344    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:15.188953    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:15.189040    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:15.199578    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:15.199663    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:15.210840    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:15.210914    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:15.221384    9116 logs.go:282] 0 containers: []
	W1211 15:35:15.221395    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:15.221461    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:15.231848    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:15.231875    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:15.231880    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:15.269455    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:15.269465    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:15.283049    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:15.283065    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:16.940105    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:16.940147    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:15.301539    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:15.301552    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:15.312816    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:15.312829    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:15.317000    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:15.317007    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:15.425445    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:15.425459    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:15.457226    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:15.457239    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:15.468937    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:15.468949    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:15.482858    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:15.482869    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:15.500307    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:15.500318    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:15.511642    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:15.511655    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:15.526798    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:15.526808    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:15.538049    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:15.538059    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:15.549143    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:15.549153    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:15.567057    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:15.567069    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:15.582101    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:15.582112    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
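
When a healthz wait lapses, the test gathers diagnostics: one `docker ps -a` per component, filtered on Docker's k8s_<container>_<pod>_<namespace>_... naming convention, then `docker logs --tail 400` on each hit (two IDs per component here because each has a pre-restart and a post-restart instance). A sketch of one gather step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Find current and exited apiserver containers by name filter.
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
        for _, id := range strings.Fields(string(out)) {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s ==\n%s", id, logs)
        }
    }
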
	I1211 15:35:18.107782    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:21.941799    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:21.941860    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:23.110038    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:23.110550    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:23.149486    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:23.149650    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:23.170510    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:23.170647    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:23.188106    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:23.188207    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:23.200531    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:23.200625    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:23.214403    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:23.214477    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:23.225123    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:23.225210    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:23.235821    9116 logs.go:282] 0 containers: []
	W1211 15:35:23.235835    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:23.235911    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:23.246465    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:23.246485    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:23.246490    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:23.261373    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:23.261382    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:23.286513    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:23.286525    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:23.303759    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:23.303771    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:23.314923    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:23.314934    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:23.326713    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:23.326725    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:23.343992    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:23.344003    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:23.355680    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:23.355694    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:23.392958    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:23.392968    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:23.435441    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:23.435452    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:23.447390    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:23.447400    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:23.462331    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:23.462340    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:23.467611    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:23.467620    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:23.481258    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:23.481269    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:23.493928    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:23.493943    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:23.519680    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:23.519690    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:23.532410    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:23.532421    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:26.942686    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:26.942746    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:26.048119    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:31.944973    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:31.945176    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:31.962585    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:31.962680    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:31.974403    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:31.974481    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:31.984938    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:31.985014    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:31.995800    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:31.995863    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:32.007255    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:32.007328    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:32.018297    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:32.018375    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:32.028058    9127 logs.go:282] 0 containers: []
	W1211 15:35:32.028072    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:32.028143    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:32.038295    9127 logs.go:282] 0 containers: []
	W1211 15:35:32.038307    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:32.038312    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:32.038318    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:32.137601    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:32.137614    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:32.152051    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:32.152062    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:32.164071    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:32.164082    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:32.191742    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:32.191754    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:32.206511    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:32.206522    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:32.218433    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:32.218444    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:32.236538    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:32.236549    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:32.247637    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:32.247651    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:32.264794    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:32.264804    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:32.278701    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:32.278714    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:32.317579    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:32.317587    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:32.329268    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:32.329278    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:32.340905    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:32.340917    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:32.352441    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:32.352451    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:32.356947    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:32.356954    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:32.369723    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:32.369733    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:31.050413    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:31.050710    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:31.075751    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:31.075870    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:31.092507    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:31.092603    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:31.111100    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:31.111183    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:31.121924    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:31.122017    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:31.131811    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:31.131891    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:31.142900    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:31.142977    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:31.154022    9116 logs.go:282] 0 containers: []
	W1211 15:35:31.154035    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:31.154103    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:31.164955    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:31.164973    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:31.164979    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:31.190511    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:31.190523    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:31.204003    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:31.204015    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:31.229128    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:31.229143    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:31.244247    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:31.244258    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:31.248766    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:31.248775    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:31.285491    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:31.285503    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:31.300930    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:31.300941    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:31.312633    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:31.312646    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:31.326863    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:31.326873    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:31.344311    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:31.344322    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:31.356731    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:31.356756    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:31.396114    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:31.396123    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:31.410121    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:31.410131    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:31.424493    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:31.424506    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:31.445953    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:31.445966    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:31.457040    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:31.457050    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:33.984375    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:34.885961    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:38.985100    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:38.985308    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:39.000952    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:39.001048    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:39.013682    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:39.013762    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:39.024889    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:39.024968    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:39.035725    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:39.035811    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:39.046412    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:39.046489    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:39.057238    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:39.057315    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:39.067012    9116 logs.go:282] 0 containers: []
	W1211 15:35:39.067025    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:39.067092    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:39.077778    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:39.077797    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:39.077803    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:39.091695    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:39.091710    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:39.103028    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:39.103042    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:39.117358    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:39.117368    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:39.131746    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:39.131757    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:39.150926    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:39.150937    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:39.189765    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:39.189777    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:39.218360    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:39.218373    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:39.232286    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:39.232297    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:39.252016    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:39.252027    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:39.263518    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:39.263529    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:39.275347    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:39.275358    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:39.293369    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:39.293383    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:39.297681    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:39.297687    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:39.332165    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:39.332175    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:39.343740    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:39.343751    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:39.355467    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:39.355477    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:39.888126    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:39.888346    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:39.902816    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:39.902914    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:39.914742    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:39.914823    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:39.925494    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:39.925585    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:39.935991    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:39.936072    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:39.946733    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:39.946814    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:39.957506    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:39.957583    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:39.967565    9127 logs.go:282] 0 containers: []
	W1211 15:35:39.967583    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:39.967652    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:39.977691    9127 logs.go:282] 0 containers: []
	W1211 15:35:39.977708    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:39.977713    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:39.977718    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:39.994564    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:39.994576    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:39.999373    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:39.999380    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:40.011117    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:40.011127    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:40.024199    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:40.024213    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:40.035598    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:40.035611    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:40.071789    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:40.071799    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:40.093446    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:40.093455    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:40.110645    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:40.110655    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:40.123585    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:40.123597    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:40.137810    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:40.137820    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:40.149558    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:40.149570    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:40.164007    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:40.164017    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:40.174863    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:40.174875    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:40.192707    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:40.192719    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:40.219554    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:40.219565    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:40.259227    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:40.259235    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:42.774568    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:41.882646    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:47.776942    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:47.777142    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:47.793016    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:47.793119    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:47.806188    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:47.806275    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:47.817099    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:47.817175    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:47.827312    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:47.827389    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:47.837471    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:47.837543    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:47.849644    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:47.849724    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:47.859933    9127 logs.go:282] 0 containers: []
	W1211 15:35:47.859944    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:47.860033    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:47.871223    9127 logs.go:282] 0 containers: []
	W1211 15:35:47.871233    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:47.871238    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:47.871243    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:47.890076    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:47.890087    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:47.901809    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:47.901822    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:47.943936    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:47.943948    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:47.978918    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:47.978931    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:47.993193    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:47.993208    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:48.006910    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:48.006924    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:48.027119    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:48.027130    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:48.039044    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:48.039056    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:48.050729    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:48.050743    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:48.063144    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:48.063159    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:48.074515    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:48.074529    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:48.091319    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:48.091334    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:48.103780    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:48.103795    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:48.107981    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:48.107991    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:48.123218    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:48.123229    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:48.135670    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:48.135684    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:46.883081    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:46.883295    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:46.901263    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:46.901377    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:46.915025    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:46.915110    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:46.929339    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:46.929419    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:46.939922    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:46.939995    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:46.951410    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:46.951472    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:46.962443    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:46.962520    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:46.972551    9116 logs.go:282] 0 containers: []
	W1211 15:35:46.972565    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:46.972618    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:46.984034    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:46.984056    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:46.984062    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:47.013385    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:47.013396    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:47.027230    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:47.027243    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:47.039249    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:47.039263    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:47.043496    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:47.043505    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:47.057058    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:47.057069    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:47.068808    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:47.068819    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:47.108327    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:47.108339    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:47.122971    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:47.122982    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:47.137581    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:47.137594    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:47.154320    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:47.154330    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:47.177710    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:47.177716    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:47.216775    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:47.216787    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:47.230838    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:47.230848    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:47.242065    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:47.242077    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:47.256457    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:47.256468    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:47.267524    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:47.267536    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:49.781424    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:50.663696    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:54.783580    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:54.783838    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:54.807313    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:54.807447    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:54.823302    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:54.823402    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:54.839523    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:54.839596    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:54.850174    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:54.850258    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:54.861494    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:54.861571    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:54.872426    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:54.872502    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:54.883353    9116 logs.go:282] 0 containers: []
	W1211 15:35:54.883364    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:54.883432    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:54.893924    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:54.893943    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:54.893948    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:54.931508    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:54.931517    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:54.935608    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:54.935615    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:54.970431    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:54.970443    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:54.985143    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:54.985159    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:54.996739    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:54.996751    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:55.011246    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:55.011257    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:55.022563    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:55.022575    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:55.050241    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:55.050252    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:55.061335    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:55.061347    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:55.075458    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:55.075469    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:55.087635    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:55.087646    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:55.101339    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:55.101350    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:55.112642    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:55.112653    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:55.126070    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:55.126082    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:55.140877    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:55.140888    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:55.158958    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:55.158968    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:55.665998    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:55.666238    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:55.690415    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:35:55.690550    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:55.710008    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:35:55.710104    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:55.740045    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:35:55.740132    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:55.751215    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:35:55.751301    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:55.761801    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:35:55.761882    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:55.772243    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:35:55.772324    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:55.782285    9127 logs.go:282] 0 containers: []
	W1211 15:35:55.782303    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:55.782370    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:55.792933    9127 logs.go:282] 0 containers: []
	W1211 15:35:55.792944    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:35:55.792950    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:35:55.792955    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:35:55.809033    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:35:55.809042    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:35:55.829603    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:35:55.829613    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:35:55.848614    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:35:55.848626    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:35:55.860006    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:55.860021    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:55.885319    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:55.885330    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:55.923945    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:35:55.923962    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:35:55.939072    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:35:55.939083    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:55.950678    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:55.950689    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:55.987363    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:35:55.987375    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:35:56.009385    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:35:56.009395    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:35:56.023944    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:35:56.023954    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:35:56.035918    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:35:56.035929    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:35:56.047579    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:35:56.047590    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:35:56.059505    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:56.059517    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:56.064032    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:35:56.064040    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:35:56.079947    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:35:56.079957    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:35:58.594081    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:57.684248    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:03.596776    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:03.597081    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:03.632337    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:03.632498    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:03.653003    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:03.653116    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:03.667971    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:03.668065    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:03.680091    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:03.680173    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:03.690285    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:03.690365    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:03.701004    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:03.701086    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:03.711667    9127 logs.go:282] 0 containers: []
	W1211 15:36:03.711679    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:03.711746    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:03.724411    9127 logs.go:282] 0 containers: []
	W1211 15:36:03.724422    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:03.724428    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:03.724434    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:03.736789    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:03.736803    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:03.748802    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:03.748813    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:03.760474    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:03.760486    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:03.776324    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:03.776339    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:03.791293    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:03.791304    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:03.806032    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:03.806042    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:03.822527    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:03.822538    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:03.827463    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:03.827470    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:03.840180    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:03.840191    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:03.858096    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:03.858110    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:02.686685    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:02.686970    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:02.717418    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:02.717532    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:02.731096    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:02.731182    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:02.744137    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:02.744242    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:02.754741    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:02.754836    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:02.765032    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:02.765117    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:02.777239    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:02.777324    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:02.787642    9116 logs.go:282] 0 containers: []
	W1211 15:36:02.787653    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:02.787713    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:02.798035    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:02.798056    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:02.798062    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:02.822150    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:02.822162    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:02.833648    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:02.833663    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:02.858998    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:02.859008    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:02.870544    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:02.870557    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:02.911066    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:02.911077    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:02.936495    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:02.936508    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:02.952361    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:02.952373    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:02.971373    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:02.971385    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:02.983511    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:02.983524    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:02.998174    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:02.998186    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:03.010525    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:03.010537    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:03.014793    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:03.014802    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:03.050556    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:03.050571    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:03.062386    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:03.062397    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:03.079900    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:03.079910    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:03.094083    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:03.094094    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:03.872779    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:03.872790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:03.892771    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:03.892783    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:03.920462    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:03.920474    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:03.932531    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:03.932542    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:03.971840    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:03.971849    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:04.006694    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:04.006710    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:06.520200    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:05.609893    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:11.522688    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:11.523197    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:11.562969    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:11.563133    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:11.584252    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:11.584358    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:11.599405    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:11.599496    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:11.611794    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:11.611872    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:11.622490    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:11.622572    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:11.633749    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:11.633833    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:11.643645    9127 logs.go:282] 0 containers: []
	W1211 15:36:11.643657    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:11.643726    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:11.654646    9127 logs.go:282] 0 containers: []
	W1211 15:36:11.654660    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:11.654666    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:11.654672    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:11.659511    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:11.659520    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:11.674206    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:11.674217    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:11.686655    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:11.686665    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:11.703850    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:11.703860    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:11.730322    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:11.730330    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:11.742418    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:11.742429    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:11.754066    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:11.754079    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:11.794422    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:11.794436    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:11.832144    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:11.832159    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:11.846840    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:11.846852    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:11.864678    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:11.864689    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:11.877658    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:11.877667    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:11.893480    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:11.893491    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:11.910073    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:11.910085    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:11.921511    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:11.921522    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:11.932682    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:11.932693    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
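
	[editor's note] The two interleaved processes above (pids 9116 and 9127) are each stuck in the same retry loop: poll https://10.0.2.15:8443/healthz, hit the client timeout ("context deadline exceeded"), then re-enumerate containers and re-gather logs. A minimal Go sketch of that healthz probe follows — not minikube's actual api_server.go code; the 5-second timeout and the skipped TLS verification are illustrative assumptions:

    // Sketch of the healthz probe seen in the "Checking apiserver healthz" /
    // "stopped: ..." pairs above. Assumed, not minikube's real implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; the log shows ~5 s between check and "stopped:"
            Transport: &http.Transport{
                // The test cluster serves a self-signed cert, so verification is skipped here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // Mirrors the "stopped: <url>: Get ...: context deadline exceeded" lines.
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        for i := 0; i < 3; i++ {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println(err)
                time.Sleep(3 * time.Second) // back off, then retry, as the log timestamps suggest
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }
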
	I1211 15:36:10.610235    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:10.610664    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:10.642381    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:10.642540    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:10.662167    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:10.662283    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:10.676659    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:10.676754    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:10.689064    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:10.689141    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:10.699601    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:10.699684    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:10.710834    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:10.710919    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:10.720829    9116 logs.go:282] 0 containers: []
	W1211 15:36:10.720848    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:10.720920    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:10.731458    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:10.731475    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:10.731481    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:10.772228    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:10.772240    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:10.786602    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:10.786613    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:10.811249    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:10.811260    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:10.825286    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:10.825297    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:10.837485    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:10.837497    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:10.841565    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:10.841572    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:10.855996    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:10.856006    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:10.876653    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:10.876665    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:10.889353    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:10.889364    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:10.928548    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:10.928563    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:10.943182    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:10.943197    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:10.954627    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:10.954638    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:10.967743    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:10.967753    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:10.993292    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:10.993308    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:11.006803    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:11.006815    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:11.021667    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:11.021681    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
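
	[editor's note] Each "Gathering logs for <component> [<id>] ..." pair above tails the last 400 lines of one container via docker logs --tail 400 <id>, while the kubelet and Docker entries come from journalctl. A hypothetical Go sketch of those two gather steps; the container ID b975235ecc20 is just one taken from the log above:

    // Sketch of the per-container and per-unit log gathering above.
    // Assumes docker and journalctl are on PATH; illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainer mirrors: docker logs --tail 400 <id>
    func tailContainer(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    // tailUnits mirrors: sudo journalctl -n 400 -u <unit> [-u <unit> ...]
    func tailUnits(units ...string) (string, error) {
        args := []string{"journalctl", "-n", "400"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        if out, err := tailContainer("b975235ecc20"); err == nil { // ID from the log above
            fmt.Print(out)
        }
        if out, err := tailUnits("docker", "cri-docker"); err == nil {
            fmt.Print(out)
        }
    }
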
	I1211 15:36:13.535590    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:14.445162    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:18.538184    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:18.538742    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:18.576424    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:18.576586    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:18.598632    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:18.598745    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:18.613715    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:18.613807    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:18.626221    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:18.626306    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:18.641288    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:18.641372    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:18.652122    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:18.652197    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:18.662568    9116 logs.go:282] 0 containers: []
	W1211 15:36:18.662580    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:18.662640    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:18.673382    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:18.673402    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:18.673408    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:18.699815    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:18.699829    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:18.714149    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:18.714164    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:18.752943    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:18.752951    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:18.767360    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:18.767372    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:18.792070    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:18.792082    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:18.848478    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:18.848493    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:18.862954    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:18.862966    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:18.874803    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:18.874816    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:18.886629    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:18.886642    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:18.899511    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:18.899522    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:18.904155    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:18.904164    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:18.918809    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:18.918820    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:18.930517    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:18.930528    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:18.944895    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:18.944905    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:18.968209    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:18.968219    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:18.983246    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:18.983262    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
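
	[editor's note] At the top of every cycle the runner enumerates containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format {{.ID}}; that is where the "2 containers: [...]" and "No container was found matching ..." lines come from. A rough Go equivalent of that enumeration, assuming docker is on PATH:

    // Sketch of the per-component container enumeration above; not minikube's code.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format {{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go:282 lines
        }
    }
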
	I1211 15:36:19.446069    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:19.446293    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:19.471891    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:19.472011    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:19.486904    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:19.487000    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:19.499062    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:19.499136    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:19.509842    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:19.509929    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:19.520419    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:19.520505    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:19.531023    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:19.531111    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:19.541030    9127 logs.go:282] 0 containers: []
	W1211 15:36:19.541044    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:19.541113    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:19.551389    9127 logs.go:282] 0 containers: []
	W1211 15:36:19.551400    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:19.551406    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:19.551411    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:19.568299    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:19.568309    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:19.607201    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:19.607212    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:19.611637    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:19.611643    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:19.646668    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:19.646680    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:19.660702    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:19.660713    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:19.672222    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:19.672237    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:19.692723    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:19.692734    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:19.712182    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:19.712192    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:19.732367    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:19.732378    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:19.746579    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:19.746594    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:19.757663    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:19.757673    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:19.784095    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:19.784104    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:19.798042    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:19.798051    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:19.812428    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:19.812441    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:19.824265    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:19.824277    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:19.836215    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:19.836226    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:22.352592    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:21.498244    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:27.354792    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:27.355046    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:27.376044    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:27.376139    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:27.390539    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:27.390628    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:27.402943    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:27.403029    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:27.418897    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:27.418982    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:27.429644    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:27.429729    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:27.440075    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:27.440160    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:27.452504    9127 logs.go:282] 0 containers: []
	W1211 15:36:27.452516    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:27.452584    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:27.462439    9127 logs.go:282] 0 containers: []
	W1211 15:36:27.462451    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:27.462456    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:27.462461    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:27.479830    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:27.479839    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:27.490992    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:27.491006    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:27.502660    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:27.502670    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:27.537133    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:27.537145    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:27.552007    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:27.552017    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:27.563491    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:27.563502    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:27.601849    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:27.601857    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:27.616002    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:27.616011    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:27.627334    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:27.627343    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:27.631563    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:27.631569    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:27.643291    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:27.643303    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:27.661481    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:27.661491    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:27.673637    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:27.673647    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:27.699027    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:27.699034    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:27.716228    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:27.716238    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:27.731452    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:27.731461    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:26.500761    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:26.501064    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:26.527692    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:26.527830    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:26.542533    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:26.542632    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:26.554838    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:26.554925    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:26.566049    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:26.566131    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:26.576336    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:26.576411    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:26.588197    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:26.588280    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:26.598296    9116 logs.go:282] 0 containers: []
	W1211 15:36:26.598306    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:26.598370    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:26.609093    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:26.609110    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:26.609115    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:26.644659    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:26.644669    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:26.670134    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:26.670145    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:26.681320    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:26.681333    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:26.695689    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:26.695701    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:26.709451    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:26.709463    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:26.721084    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:26.721094    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:26.732677    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:26.732687    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:26.756026    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:26.756035    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:26.768021    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:26.768034    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:26.806914    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:26.806923    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:26.810913    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:26.810922    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:26.825112    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:26.825122    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:26.842080    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:26.842094    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:26.856745    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:26.856761    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:26.870699    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:26.870711    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:26.882803    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:26.882814    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:29.395073    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:30.244994    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:34.397261    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:34.397470    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:34.413384    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:34.413493    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:34.426202    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:34.426289    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:34.437245    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:34.437326    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:34.447706    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:34.447788    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:34.458470    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:34.458553    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:34.468902    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:34.468981    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:34.479419    9116 logs.go:282] 0 containers: []
	W1211 15:36:34.479431    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:34.479498    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:34.489838    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:34.489861    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:34.489867    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:34.507847    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:34.507861    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:34.512232    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:34.512238    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:34.539612    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:34.539624    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:34.553293    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:34.553306    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:34.565302    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:34.565313    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:34.582010    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:34.582020    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:34.593614    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:34.593623    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:34.605474    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:34.605487    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:34.640134    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:34.640145    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:34.654322    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:34.654333    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:34.665971    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:34.665982    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:34.680057    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:34.680068    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:34.694103    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:34.694113    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:34.733451    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:34.733460    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:34.748345    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:34.748354    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:34.760017    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:34.760027    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:35.247179    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:35.247379    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:35.260457    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:35.260551    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:35.271888    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:35.271964    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:35.287018    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:35.287095    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:35.298444    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:35.298522    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:35.309253    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:35.309339    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:35.320455    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:35.320540    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:35.330936    9127 logs.go:282] 0 containers: []
	W1211 15:36:35.330947    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:35.331012    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:35.340635    9127 logs.go:282] 0 containers: []
	W1211 15:36:35.340646    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:35.340652    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:35.340658    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:35.352147    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:35.352161    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:35.363536    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:35.363547    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:35.387576    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:35.387588    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:35.399693    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:35.399707    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:35.413696    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:35.413709    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:35.430101    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:35.430111    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:35.447375    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:35.447387    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:35.459065    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:35.459079    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:35.485561    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:35.485572    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:35.510394    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:35.510404    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:35.535054    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:35.535064    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:35.546945    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:35.546956    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:35.551209    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:35.551215    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:35.586616    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:35.586630    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:35.605504    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:35.605516    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:35.617391    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:35.617401    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:38.159583    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:37.286160    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:43.161778    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:43.161972    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:43.179959    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:43.180078    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:43.194097    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:43.194190    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:43.206880    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:43.206968    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:43.217662    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:43.217744    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:43.228233    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:43.228315    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:43.239247    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:43.239326    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:43.249580    9127 logs.go:282] 0 containers: []
	W1211 15:36:43.249591    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:43.249655    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:43.260363    9127 logs.go:282] 0 containers: []
	W1211 15:36:43.260374    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:43.260380    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:43.260385    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:43.282332    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:43.282342    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:43.307781    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:43.307793    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:43.319772    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:43.319782    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:43.337690    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:43.337699    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:43.349632    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:43.349642    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:43.361331    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:43.361341    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:43.372217    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:43.372233    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:43.384605    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:43.384616    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:43.411285    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:43.411294    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:43.450671    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:43.450681    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:43.463350    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:43.463362    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:43.475664    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:43.475673    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:43.515743    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:43.515752    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:43.520046    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:43.520055    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:43.531931    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:43.531942    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:43.545784    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:43.545793    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:42.287553    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:42.287739    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:42.301184    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:42.301272    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:42.312349    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:42.312432    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:42.322865    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:42.322948    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:42.333545    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:42.333630    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:42.344208    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:42.344282    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:42.354242    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:42.354317    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:42.364490    9116 logs.go:282] 0 containers: []
	W1211 15:36:42.364503    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:42.364577    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:42.375222    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:42.375238    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:42.375244    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:42.411641    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:42.411649    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:42.415674    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:42.415681    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:42.427758    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:42.427769    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:42.439917    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:42.439930    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:42.474694    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:42.474706    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:42.488913    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:42.488923    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:42.500242    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:42.500253    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:42.519399    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:42.519409    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:42.534138    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:42.534149    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:42.545751    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:42.545761    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:42.559937    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:42.559947    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:42.573801    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:42.573810    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:42.585561    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:42.585572    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:42.610338    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:42.610348    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:42.621839    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:42.621850    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:42.635785    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:42.635795    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:45.161124    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:46.061996    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:50.163448    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:50.163693    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:50.180004    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:50.180105    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:50.192921    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:50.193002    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:50.207335    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:50.207423    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:50.217943    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:50.218027    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:50.228765    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:50.228842    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:50.239031    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:50.239114    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:50.248869    9116 logs.go:282] 0 containers: []
	W1211 15:36:50.248881    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:50.248953    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:50.260681    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:50.260699    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:50.260705    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:50.275998    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:50.276007    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:51.064075    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:51.064225    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:51.078198    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:51.078293    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:51.089553    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:51.089637    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:51.101781    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:51.101861    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:51.112355    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:51.112431    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:51.122909    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:51.122978    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:51.133572    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:51.133666    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:51.148835    9127 logs.go:282] 0 containers: []
	W1211 15:36:51.148848    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:51.148931    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:51.161677    9127 logs.go:282] 0 containers: []
	W1211 15:36:51.161690    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:51.161695    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:51.161701    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:51.166354    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:51.166359    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:51.200491    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:51.200505    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:51.214856    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:51.214867    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:51.227605    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:51.227618    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:51.242845    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:51.242857    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:51.255928    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:51.255940    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:51.270714    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:51.270725    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:51.281996    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:51.282009    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:51.307778    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:51.307790    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:51.345995    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:51.346011    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:51.365464    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:51.365479    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:51.377972    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:51.377983    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:51.392123    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:51.392134    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:51.410086    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:51.410099    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:36:51.433319    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:51.433332    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:51.445281    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:51.445294    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
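
The cycle above (and each repetition below) is the same diagnostics pass: enumerate the containers for each control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail each one with `docker logs --tail 400 <id>`. A minimal local Go sketch of that pass follows; it is illustrative only — the real commands run inside the guest over minikube's ssh_runner and under sudo, and the component list is just the one visible in these lines.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors: docker logs --tail 400 <id>
    // CombinedOutput is used because container logs often arrive on stderr.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println("docker ps failed:", err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			logs, err := tailLogs(id)
    			if err != nil {
    				fmt.Println("docker logs failed:", err)
    				continue
    			}
    			fmt.Printf("--- %s [%s]: %d bytes\n", c, id, len(logs))
    		}
    	}
    }
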
	I1211 15:36:50.300449    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:50.300460    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:50.312735    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:50.312746    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:50.327155    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:50.327165    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:50.338584    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:50.338597    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:50.362575    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:50.362586    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:50.380384    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:50.380397    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:50.391677    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:50.391689    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:50.426755    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:50.426768    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:50.440569    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:50.440581    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:50.451756    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:50.451766    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:50.463274    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:50.463285    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:50.478445    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:50.478456    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:50.482565    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:50.482574    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:50.497744    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:50.497761    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:50.534268    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:50.534276    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:53.047900    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:53.960233    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:58.050506    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:58.050805    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:58.075695    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:58.075829    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:58.095092    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:58.095191    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:58.111717    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:58.111806    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:58.122124    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:58.122199    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:58.132517    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:58.132592    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:58.152329    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:58.152409    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:58.163308    9116 logs.go:282] 0 containers: []
	W1211 15:36:58.163320    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:58.163386    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:58.174249    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:58.174270    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:58.174276    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:58.189634    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:58.189645    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:58.209058    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:58.209073    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:58.223470    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:58.223482    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:58.246518    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:58.246530    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:58.250533    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:58.250540    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:58.285159    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:58.285172    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:58.299680    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:58.299691    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:58.310849    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:58.310860    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:58.322574    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:58.322584    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:58.339796    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:58.339809    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:58.351254    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:58.351269    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:58.390490    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:58.390498    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:58.407486    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:58.407499    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:58.432624    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:58.432635    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:58.444094    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:58.444106    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:58.455957    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:58.455969    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
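
Each diagnostics pass is triggered by a failed apiserver health probe: a GET against https://10.0.2.15:8443/healthz that times out client-side, which Go's http.Client reports exactly as logged above ("context deadline exceeded (Client.Timeout exceeded while awaiting headers)"). A minimal probe sketch, assuming a 5-second client timeout (inferred from the ~5 s gap between each "Checking" and "stopped" line, not taken from minikube source) and skipping certificate verification because this ad-hoc client holds no cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed; matches the check->stopped gap above
    		Transport: &http.Transport{
    			// Assumption: skip verification since this probe has no cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		// On timeout this prints the same error text seen in the log lines.
    		fmt.Println("stopped:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }
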
	I1211 15:36:58.962457    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:58.962643    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:58.981449    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:36:58.981541    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:58.993540    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:36:58.993625    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:59.004336    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:36:59.004414    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:59.014831    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:36:59.014902    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:59.024933    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:36:59.025012    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:59.035871    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:36:59.035947    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:59.045857    9127 logs.go:282] 0 containers: []
	W1211 15:36:59.045870    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:59.045939    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:59.055941    9127 logs.go:282] 0 containers: []
	W1211 15:36:59.055951    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:36:59.055957    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:36:59.055962    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:36:59.070303    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:36:59.070314    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:36:59.088462    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:36:59.088475    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:36:59.100555    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:59.100565    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:59.125877    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:59.125893    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:59.165317    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:36:59.165333    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:36:59.177706    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:36:59.177717    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:36:59.191974    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:36:59.191990    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:36:59.203387    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:36:59.203400    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:36:59.219346    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:36:59.219357    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:36:59.231523    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:36:59.231538    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:36:59.244077    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:36:59.244089    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:59.256525    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:59.256538    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:59.261502    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:59.261509    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:59.298072    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:36:59.298082    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:36:59.317637    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:36:59.317651    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:36:59.329098    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:36:59.329112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
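
Putting the two together, the repetition in this section is a wait loop: probe healthz, run a full diagnostics pass on each failure, and retry until an overall deadline. A hedged sketch of that shape — the interval and budget are illustrative guesses from the log gaps, not minikube's actual constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // Stand-ins for the two sketches above.
    func checkHealthz() error { return errors.New("context deadline exceeded") }
    func gatherLogs()         { /* enumerate containers, tail logs, journalctl, dmesg */ }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative overall budget
    	for time.Now().Before(deadline) {
    		if err := checkHealthz(); err != nil {
    			fmt.Println("stopped:", err)
    			gatherLogs()                // every failed probe triggers a full pass
    			time.Sleep(3 * time.Second) // roughly the gap between cycles above
    			continue
    		}
    		fmt.Println("apiserver is healthy")
    		return
    	}
    	fmt.Println("gave up waiting for apiserver")
    }
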
	I1211 15:37:01.847791    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:00.970187    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:06.850091    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:06.850376    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:06.877783    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:06.877920    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:06.894739    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:06.894835    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:06.914186    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:06.914265    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:06.925263    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:06.925347    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:06.936103    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:06.936175    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:06.946564    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:06.946649    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:06.956030    9127 logs.go:282] 0 containers: []
	W1211 15:37:06.956043    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:06.956107    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:06.966835    9127 logs.go:282] 0 containers: []
	W1211 15:37:06.966850    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:06.966856    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:06.966862    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:37:06.982299    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:06.982311    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:06.986650    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:06.986659    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:07.022108    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:07.022119    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:37:07.039493    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:07.039505    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:07.050800    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:07.050811    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:07.062852    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:07.062866    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:07.103438    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:07.103448    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:07.114695    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:07.114708    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:07.131495    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:07.131508    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:07.142992    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:07.143005    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:07.154486    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:07.154499    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:07.170379    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:07.170393    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:07.184238    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:07.184255    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:07.198207    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:07.198223    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:07.220473    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:07.220484    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:07.244868    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:07.244877    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
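
One step worth calling out from the "container status" lines: the shell command prefers crictl when it exists and falls back to `docker ps -a` otherwise. The same fallback expressed in Go, as a sketch — sudo is dropped here, and crictl normally needs a configured runtime endpoint, so treat this as shape only:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl, falling back to docker, mirroring
    // the shell chain: `which crictl || echo crictl` ps -a || docker ps -a
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
    			return out, nil
    		}
    	}
    	// crictl missing or failed: same fallback as the shell's `||`.
    	return exec.Command("docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
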
	I1211 15:37:05.972407    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:05.972593    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:05.984615    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:05.984698    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:05.995230    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:05.995313    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:06.005994    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:06.006072    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:06.016423    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:06.016503    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:06.031738    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:06.031812    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:06.042057    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:06.042137    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:06.052812    9116 logs.go:282] 0 containers: []
	W1211 15:37:06.052830    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:06.052894    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:06.063423    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:06.063442    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:06.063447    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:06.076993    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:06.077004    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:06.091634    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:06.091647    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:06.102933    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:06.102943    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:06.117963    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:06.117978    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:06.131782    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:06.131793    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:06.150844    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:06.150855    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:06.164556    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:06.164568    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:06.184599    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:06.184618    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:06.225460    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:06.225481    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:06.229953    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:06.229960    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:06.266386    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:06.266399    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:06.291403    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:06.291414    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:06.303408    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:06.303418    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:06.315171    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:06.315181    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:06.335910    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:06.335921    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:06.348221    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:06.348234    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:08.875345    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:09.759057    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:13.877896    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:13.878191    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:13.912333    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:13.912460    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:13.933772    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:13.933867    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:13.946113    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:13.946198    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:13.956370    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:13.956453    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:13.967112    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:13.967190    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:13.977745    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:13.977826    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:13.988142    9116 logs.go:282] 0 containers: []
	W1211 15:37:13.988159    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:13.988225    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:13.998275    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:13.998293    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:13.998299    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:14.009664    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:14.009680    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:14.023178    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:14.023189    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:14.040297    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:14.040307    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:14.053841    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:14.053851    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:14.068759    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:14.068771    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:14.083768    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:14.083778    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:14.095314    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:14.095325    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:14.106536    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:14.106547    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:14.143470    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:14.143501    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:14.148358    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:14.148368    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:14.184373    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:14.184383    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:14.196013    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:14.196027    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:14.214882    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:14.214893    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:14.227427    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:14.227438    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:14.255395    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:14.255407    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:14.267211    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:14.267220    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:14.761412    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:14.761746    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:14.788165    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:14.788317    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:14.806141    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:14.806252    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:14.819769    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:14.819856    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:14.832092    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:14.832159    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:14.842677    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:14.842755    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:14.854099    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:14.854176    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:14.864572    9127 logs.go:282] 0 containers: []
	W1211 15:37:14.864583    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:14.864649    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:14.876299    9127 logs.go:282] 0 containers: []
	W1211 15:37:14.876310    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:14.876316    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:14.876321    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:14.888008    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:14.888019    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:14.900680    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:14.900693    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:37:14.914479    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:14.914493    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:14.934307    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:14.934322    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:14.945695    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:14.945707    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:14.950047    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:14.950054    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:14.985957    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:14.985966    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:14.998993    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:14.999007    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:15.013589    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:15.013603    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:37:15.027807    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:15.027821    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:15.042460    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:15.042470    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:15.054043    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:15.054057    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:15.071380    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:15.071391    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:15.112098    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:15.112107    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:15.124962    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:15.124976    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:15.149936    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:15.149945    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:17.663022    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:16.792846    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:22.665527    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:22.665747    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:22.686057    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:22.686159    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:22.705707    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:22.705799    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:22.717062    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:22.717146    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:22.728004    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:22.728109    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:22.738446    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:22.738531    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:22.749006    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:22.749102    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:22.758887    9127 logs.go:282] 0 containers: []
	W1211 15:37:22.758904    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:22.758965    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:22.768977    9127 logs.go:282] 0 containers: []
	W1211 15:37:22.768987    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:22.768993    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:22.768999    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:22.780892    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:22.780903    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:22.820981    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:22.820989    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:37:22.834260    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:22.834274    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:22.852073    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:22.852084    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:22.863186    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:22.863199    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:22.905041    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:22.905057    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:22.917731    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:22.917745    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:22.930172    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:22.930183    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:37:22.945455    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:22.945469    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:22.957525    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:22.957540    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:22.975465    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:22.975482    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:22.993084    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:22.993096    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:23.017491    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:23.017498    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:23.021822    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:23.021828    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:23.035018    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:23.035030    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:23.051691    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:23.051701    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:21.795359    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:21.795685    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:21.826444    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:21.826599    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:21.849859    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:21.849970    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:21.863317    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:21.863405    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:21.875000    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:21.875090    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:21.885844    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:21.885926    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:21.896571    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:21.896656    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:21.910121    9116 logs.go:282] 0 containers: []
	W1211 15:37:21.910132    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:21.910205    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:21.921049    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:21.921069    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:21.921075    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:21.946289    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:21.946301    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:21.961918    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:21.961927    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:21.976641    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:21.976652    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:21.988336    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:21.988347    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:21.992541    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:21.992550    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:22.005269    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:22.005281    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:22.042445    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:22.042465    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:22.079249    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:22.079259    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:22.095066    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:22.095077    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:22.112939    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:22.112952    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:22.137753    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:22.137760    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:22.151880    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:22.151890    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:22.172944    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:22.172954    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:22.184819    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:22.184830    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:22.202940    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:22.202952    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:22.217375    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:22.217388    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:24.729198    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:25.565415    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:29.731702    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:29.731965    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:29.755364    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:29.755503    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:29.771159    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:29.771264    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:29.787669    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:29.787748    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:29.798105    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:29.798183    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:29.808490    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:29.808570    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:29.825735    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:29.825813    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:29.835680    9116 logs.go:282] 0 containers: []
	W1211 15:37:29.835692    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:29.835755    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:29.849112    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:29.849130    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:29.849136    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:29.860790    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:29.860802    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:29.872792    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:29.872805    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:29.887579    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:29.887589    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:29.906047    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:29.906059    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:29.921663    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:29.921673    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:29.933683    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:29.933695    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:29.972608    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:29.972620    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:29.987520    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:29.987536    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:29.999137    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:29.999150    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:30.013139    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:30.013150    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:30.035236    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:30.035245    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:30.039828    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:30.039837    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:30.053184    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:30.053195    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:30.077504    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:30.077515    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:30.113093    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:30.113104    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:30.126921    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:30.126931    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:30.567704    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:30.567902    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:30.587233    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:30.587351    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:30.602649    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:30.602728    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:30.614565    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:30.614645    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:30.625346    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:30.625436    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:30.635722    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:30.635804    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:30.652133    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:30.652206    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:30.665051    9127 logs.go:282] 0 containers: []
	W1211 15:37:30.665064    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:30.665138    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:30.675035    9127 logs.go:282] 0 containers: []
	W1211 15:37:30.675050    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:30.675060    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:30.675065    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:37:30.689050    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:30.689060    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:30.702395    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:30.702407    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:30.719640    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:30.719651    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:30.731599    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:30.731610    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:30.769497    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:30.769508    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:30.780906    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:30.780917    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:30.797486    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:30.797496    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:30.812090    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:30.812099    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:30.823330    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:30.823345    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:30.835547    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:30.835556    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:30.840326    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:30.840335    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:30.852735    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:30.852745    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:30.864012    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:30.864022    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:30.879108    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:30.879120    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:30.904657    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:30.904666    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:30.944450    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:30.944463    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
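With the container IDs in hand, the cycle tails the last 400 lines of each container and collects the host-level sources as well: the kubelet and Docker units via journalctl, kernel warnings via dmesg, a crictl (or docker) process listing, and `kubectl describe nodes` run with the cluster's own kubeconfig. A condensed sketch of the same commands, again executed locally instead of through ssh_runner (an assumption), with one representative container ID taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command and prints its combined output,
// standing in for minikube's ssh_runner round trips.
func run(cmdline string) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	fmt.Printf("== %s (err=%v)\n%s\n", cmdline, err, out)
}

func main() {
	run("docker logs --tail 400 6be8bf310db2") // per-container logs, e.g. etcd
	run("sudo journalctl -u kubelet -n 400")
	run("sudo journalctl -u docker -u cri-docker -n 400")
	run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
}
```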
	I1211 15:37:33.461564    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:32.640545    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:38.463704    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:38.463849    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:38.476038    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:38.476108    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:38.486053    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:38.486130    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:38.500789    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:38.500867    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:38.511466    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:38.511551    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:38.525322    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:38.525402    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:38.535738    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:38.535820    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:38.545418    9127 logs.go:282] 0 containers: []
	W1211 15:37:38.545430    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:38.545487    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:38.556169    9127 logs.go:282] 0 containers: []
	W1211 15:37:38.556180    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:38.556186    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:38.556191    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:38.568620    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:38.568631    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:38.579537    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:38.579550    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:38.614280    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:38.614290    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:38.626886    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:38.626898    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:38.643354    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:38.643364    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:38.655569    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:38.655580    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:38.672338    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:38.672350    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:38.686686    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:38.686698    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:38.691886    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:38.691892    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:38.703316    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:38.703329    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:38.714889    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:38.714901    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:38.739433    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:38.739443    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:38.779347    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:38.779358    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:37:38.797097    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:38.797111    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:38.808063    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:38.808075    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:38.819845    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:38.819860    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
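Note that two minikube runs are interleaved in this section, distinguishable only by the PID field of the klog header (9127 and 9116); that is why timestamps appear to jump backwards between adjacent lines, as they do just below. A small parser for that header format can de-interleave the log when reading a failure like this one; the regex is an assumption matched against the lines above, not an official klog API:

```go
package main

import (
	"fmt"
	"regexp"
)

// severity + MMDD + time + PID + source:line, e.g.
// "I1211 15:37:38.819860    9127 logs.go:123] ..."
var header = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.-]+:\d+)\]`)

func main() {
	line := "I1211 15:37:38.819860    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ..."
	if m := header.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n",
			m[1], m[2], m[3], m[4], m[5])
	}
}
```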
	I1211 15:37:37.641501    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:37.641705    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:37.659534    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:37.659632    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:37.672691    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:37.672785    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:37.683917    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:37.683999    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:37.694005    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:37.694083    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:37.704462    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:37.704543    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:37.717892    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:37.717972    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:37.730391    9116 logs.go:282] 0 containers: []
	W1211 15:37:37.730404    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:37.730473    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:37.740991    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:37.741008    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:37.741013    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:37.764734    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:37.764741    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:37.802243    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:37.802252    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:37.816603    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:37.816614    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:37.830206    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:37.830216    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:37.841427    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:37.841436    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:37.852724    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:37.852733    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:37.856809    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:37.856819    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:37.891327    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:37.891340    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:37.912597    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:37.912608    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:37.930327    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:37.930337    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:37.942362    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:37.942374    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:37.957268    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:37.957279    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:37.968457    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:37.968468    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:37.983047    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:37.983059    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:37.994867    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:37.994878    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:38.020235    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:38.020246    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:41.336592    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:40.533747    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:46.338760    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:46.338935    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:46.349766    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:46.349857    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:46.361221    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:46.361298    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:46.372511    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:46.372594    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:46.383394    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:46.383478    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:46.395002    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:46.395078    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:46.405854    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:46.405933    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:46.416013    9127 logs.go:282] 0 containers: []
	W1211 15:37:46.416024    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:46.416093    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:46.428505    9127 logs.go:282] 0 containers: []
	W1211 15:37:46.428519    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:46.428525    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:46.428531    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:46.439786    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:46.439798    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:46.450864    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:46.450874    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:46.467257    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:46.467266    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:46.479028    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:46.479042    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:46.501708    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:46.501716    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:46.514000    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:46.514011    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:46.527963    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:46.527974    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:37:46.541896    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:46.541905    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:46.553180    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:46.553189    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:46.566069    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:46.566079    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:46.578537    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:46.578548    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:37:46.592656    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:46.592666    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:46.627909    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:46.627920    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:46.640436    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:46.640447    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:46.658659    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:46.658668    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:46.697780    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:46.697787    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:45.536073    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:45.536697    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:45.575385    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:45.575553    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:45.596243    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:45.596360    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:45.611536    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:45.611633    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:45.629655    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:45.629758    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:45.640557    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:45.640637    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:45.651753    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:45.651836    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:45.665520    9116 logs.go:282] 0 containers: []
	W1211 15:37:45.665533    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:45.665598    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:45.676218    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:45.676237    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:45.676242    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:45.690142    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:45.690153    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:45.701642    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:45.701651    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:45.705844    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:45.705851    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:45.730208    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:45.730219    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:45.743161    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:45.743171    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:45.757024    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:45.757035    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:45.768457    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:45.768469    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:45.806666    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:45.806676    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:45.820775    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:45.820786    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:45.835392    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:45.835402    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:45.847494    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:45.847507    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:45.865245    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:45.865256    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:45.879962    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:45.879973    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:45.918947    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:45.918960    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:45.942145    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:45.942154    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:45.954002    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:45.954012    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:48.466390    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:49.204439    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:53.466993    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:53.467466    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:53.496710    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:53.496849    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:53.520281    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:53.520376    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:53.533287    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:53.533362    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:53.544258    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:53.544337    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:53.554508    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:53.554585    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:53.564966    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:53.565036    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:53.575274    9116 logs.go:282] 0 containers: []
	W1211 15:37:53.575290    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:53.575353    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:53.585855    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:53.585872    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:53.585878    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:53.599934    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:53.599947    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:53.611746    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:53.611756    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:53.633660    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:53.633667    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:53.638142    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:53.638148    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:53.675013    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:53.675023    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:53.690167    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:53.690180    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:53.705867    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:53.705882    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:53.718170    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:53.718181    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:53.732601    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:53.732616    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:53.757752    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:53.757762    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:53.769444    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:53.769453    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:53.780962    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:53.780971    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:53.820894    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:53.820903    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:53.839925    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:53.839938    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:53.854402    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:53.854414    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:53.869820    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:53.869831    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:54.206658    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:54.206863    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:54.224363    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:37:54.224463    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:54.236803    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:37:54.236891    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:54.247691    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:37:54.247776    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:54.258347    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:37:54.258424    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:54.272802    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:37:54.272883    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:54.283393    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:37:54.283481    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:54.293983    9127 logs.go:282] 0 containers: []
	W1211 15:37:54.293995    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:54.294065    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:54.304678    9127 logs.go:282] 0 containers: []
	W1211 15:37:54.304689    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:37:54.304694    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:37:54.304699    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:37:54.316465    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:54.316475    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:54.340385    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:54.340393    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:54.380320    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:54.380329    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:54.415605    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:37:54.415618    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:37:54.426807    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:37:54.426819    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:37:54.473109    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:37:54.473125    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:37:54.488007    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:37:54.488021    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:37:54.501595    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:37:54.501608    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:37:54.515220    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:37:54.515231    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:37:54.533645    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:37:54.533654    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:54.546141    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:37:54.546152    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:37:54.560629    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:37:54.560640    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:37:54.572374    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:37:54.572389    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:37:54.588915    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:37:54.588926    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:37:54.599991    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:54.600006    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:54.604467    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:37:54.604476    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:37:57.123513    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:56.383992    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:02.125792    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:02.125927    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:02.138734    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:02.138830    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:02.149906    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:02.149981    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:02.160750    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:02.160834    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:02.171387    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:02.171472    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:02.182078    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:02.182164    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:02.193142    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:02.193222    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:02.203389    9127 logs.go:282] 0 containers: []
	W1211 15:38:02.203401    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:02.203466    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:02.213504    9127 logs.go:282] 0 containers: []
	W1211 15:38:02.213517    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:02.213522    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:02.213528    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:02.226166    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:02.226177    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:02.249249    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:02.249257    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:02.287514    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:02.287523    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:02.300270    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:02.300283    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:02.312008    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:02.312020    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:02.323469    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:02.323481    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:02.339993    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:02.340003    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:02.357412    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:02.357424    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:02.370344    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:02.370355    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:02.406071    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:02.406092    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:02.424676    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:02.424686    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:02.438538    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:02.438551    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:02.458084    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:02.458095    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:02.469948    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:02.469957    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:02.481487    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:02.481500    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:02.486533    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:02.486541    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:01.386123    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:01.386375    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:01.403147    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:38:01.403248    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:01.416206    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:38:01.416289    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:01.426923    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:38:01.427007    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:01.437654    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:38:01.437752    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:01.447709    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:38:01.447788    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:01.458313    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:38:01.458404    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:01.469108    9116 logs.go:282] 0 containers: []
	W1211 15:38:01.469120    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:01.469186    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:01.479610    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:38:01.479632    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:01.479638    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:01.483769    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:01.483779    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:01.521831    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:38:01.521842    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:38:01.538646    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:38:01.538657    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:38:01.550344    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:38:01.550358    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:01.562033    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:38:01.562043    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:38:01.588033    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:38:01.588044    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:38:01.602122    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:38:01.602135    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:38:01.616532    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:38:01.616542    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:38:01.650420    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:38:01.650437    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:38:01.668337    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:38:01.668349    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:38:01.687068    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:01.687080    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:01.710417    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:01.710425    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:01.751033    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:38:01.751042    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:38:01.768575    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:38:01.768586    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:38:01.780003    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:38:01.780015    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:38:01.796799    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:38:01.796814    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:38:04.313314    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:05.003098    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:09.315477    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:09.315672    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:09.327314    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:38:09.327405    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:09.338436    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:38:09.338527    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:09.348799    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:38:09.348883    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:09.359661    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:38:09.359750    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:09.370406    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:38:09.370487    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:09.380763    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:38:09.380846    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:09.391629    9116 logs.go:282] 0 containers: []
	W1211 15:38:09.391643    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:09.391710    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:09.407115    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:38:09.407134    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:38:09.407140    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:38:09.419032    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:09.419047    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:09.423142    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:38:09.423150    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:38:09.439366    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:38:09.439378    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:38:09.453255    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:09.453267    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:09.476910    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:38:09.476921    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:38:09.489300    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:38:09.489311    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:38:09.503816    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:38:09.503831    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:38:09.518390    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:38:09.518403    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:38:09.536072    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:38:09.536084    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:38:09.568606    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:38:09.568618    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:38:09.593829    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:38:09.593845    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:38:09.605540    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:38:09.605555    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:09.617288    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:09.617296    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:09.651795    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:38:09.651811    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:38:09.676613    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:09.676625    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:09.714612    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:38:09.714621    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:38:10.005301    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:10.005448    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:10.018540    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:10.018631    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:10.029127    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:10.029205    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:10.039844    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:10.039923    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:10.049981    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:10.050057    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:10.060298    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:10.060364    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:10.070704    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:10.070776    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:10.080672    9127 logs.go:282] 0 containers: []
	W1211 15:38:10.080685    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:10.080754    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:10.092381    9127 logs.go:282] 0 containers: []
	W1211 15:38:10.092395    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:10.092400    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:10.092406    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:10.107306    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:10.107317    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:10.118778    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:10.118790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:10.130794    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:10.130806    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:10.141994    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:10.142002    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:10.165850    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:10.165862    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:10.202759    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:10.202775    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:10.217315    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:10.217327    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:10.231195    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:10.231207    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:10.272812    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:10.272821    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:10.286673    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:10.286685    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:10.304785    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:10.304798    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:10.317311    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:10.317325    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:10.333425    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:10.333436    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:10.345005    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:10.345015    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:10.356497    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:10.356506    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:10.360916    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:10.360923    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
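	The block above is minikube's periodic log-gathering pass: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component to collect container IDs, then "docker logs --tail 400" on each hit. The following is a minimal Go sketch of that pattern; the function names and output format are illustrative, not minikube's actual logs.go implementation.

	    // logdump.go - sketch of the gather pattern shown in the log above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all container IDs (running or exited) whose
	    // name matches k8s_<component>, mirroring the docker ps lines above.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
	            ids, err := containerIDs(component)
	            if err != nil || len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", component)
	                continue
	            }
	            for _, id := range ids {
	                // docker logs --tail 400 <id>, as in the ssh_runner lines above
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
	            }
	        }
	    }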
	I1211 15:38:12.874273    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:12.227877    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:17.230224    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
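	The "stopped: ... Client.Timeout exceeded while awaiting headers" lines above are an HTTP GET against the apiserver's /healthz endpoint failing at the client-side deadline. A self-contained Go sketch of one such probe follows; the 5-second timeout is an assumption inferred from the probe spacing in the log, and skipping TLS verification stands in for loading the cluster CA.

	    // healthz.go - sketch of a single apiserver healthz probe.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // assumed per-probe deadline
	            Transport: &http.Transport{
	                // the guest apiserver cert is not trusted here; a real
	                // client would load the cluster CA instead
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            // a hung apiserver surfaces exactly as in the log above
	            fmt.Println("stopped:", err)
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz status:", resp.Status)
	    }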
	I1211 15:38:17.230310    9116 kubeadm.go:597] duration metric: took 4m3.835391125s to restartPrimaryControlPlane
	W1211 15:38:17.230376    9116 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 15:38:17.230400    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1211 15:38:18.315171    9116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.084791667s)
	I1211 15:38:18.315246    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 15:38:18.320408    9116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:38:18.323291    9116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:38:18.325999    9116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 15:38:18.326011    9116 kubeadm.go:157] found existing configuration files:
	
	I1211 15:38:18.326045    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf
	I1211 15:38:18.328578    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 15:38:18.328611    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:38:18.331935    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf
	I1211 15:38:18.334728    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 15:38:18.334753    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:38:18.337522    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf
	I1211 15:38:18.340318    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 15:38:18.340342    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:38:18.343580    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf
	I1211 15:38:18.346094    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 15:38:18.346123    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
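	The cleanup pass above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (grep exits non-zero because the files do not exist after the reset) and removes any file that fails the check so kubeadm regenerates it. A compact Go sketch of the same loop, with the endpoint copied from the log:

	    // stale_config.go - sketch of the stale kubeconfig cleanup above.
	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:61417" // from the log
	        for _, conf := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            // grep exits non-zero when the file is missing or lacks the endpoint
	            if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
	                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
	                _ = os.Remove(conf) // the file may simply not exist; ignore the error
	            }
	        }
	    }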
	I1211 15:38:18.348911    9116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 15:38:18.367790    9116 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1211 15:38:18.367827    9116 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 15:38:18.415909    9116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 15:38:18.416020    9116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 15:38:18.416095    9116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1211 15:38:18.467008    9116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 15:38:17.876816    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:17.876954    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:17.889031    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:17.889113    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:17.900986    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:17.901068    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:17.912700    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:17.912790    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:17.925407    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:17.925490    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:17.937156    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:17.937241    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:17.948932    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:17.949029    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:17.960750    9127 logs.go:282] 0 containers: []
	W1211 15:38:17.960762    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:17.960834    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:17.972171    9127 logs.go:282] 0 containers: []
	W1211 15:38:17.972183    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:17.972189    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:17.972195    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:17.977354    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:17.977367    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:17.992191    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:17.992201    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:18.004885    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:18.004896    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:18.016731    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:18.016744    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:18.030007    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:18.030020    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:18.042690    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:18.042722    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:18.061668    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:18.061683    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:18.076237    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:18.076250    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:18.112925    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:18.112939    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:18.125527    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:18.125542    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:18.137490    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:18.137506    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:18.179115    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:18.179127    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:18.193782    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:18.193796    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:18.206455    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:18.206468    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:18.226124    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:18.226137    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:18.249627    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:18.249648    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:18.470996    9116 out.go:235]   - Generating certificates and keys ...
	I1211 15:38:18.471103    9116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 15:38:18.471259    9116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 15:38:18.471362    9116 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 15:38:18.471400    9116 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1211 15:38:18.471442    9116 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 15:38:18.471472    9116 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1211 15:38:18.471507    9116 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1211 15:38:18.471554    9116 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1211 15:38:18.471600    9116 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 15:38:18.471642    9116 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 15:38:18.471681    9116 kubeadm.go:310] [certs] Using the existing "sa" key
	I1211 15:38:18.471715    9116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 15:38:18.517256    9116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 15:38:18.606874    9116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 15:38:18.824285    9116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 15:38:18.885000    9116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 15:38:18.912618    9116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 15:38:18.912991    9116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 15:38:18.913092    9116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 15:38:18.985465    9116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 15:38:18.989373    9116 out.go:235]   - Booting up control plane ...
	I1211 15:38:18.989418    9116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 15:38:18.989457    9116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 15:38:18.989508    9116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 15:38:18.989558    9116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 15:38:18.989643    9116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1211 15:38:20.767088    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:23.487682    9116 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501003 seconds
	I1211 15:38:23.487745    9116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 15:38:23.491460    9116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 15:38:24.002675    9116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 15:38:24.002937    9116 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-684000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 15:38:24.506696    9116 kubeadm.go:310] [bootstrap-token] Using token: dsob3n.pl1rcy9wqctvzov5
	I1211 15:38:24.513125    9116 out.go:235]   - Configuring RBAC rules ...
	I1211 15:38:24.513186    9116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 15:38:24.513241    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 15:38:24.515170    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 15:38:24.519567    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 15:38:24.520471    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 15:38:24.521576    9116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 15:38:24.526339    9116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 15:38:24.682083    9116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 15:38:24.910158    9116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 15:38:24.910585    9116 kubeadm.go:310] 
	I1211 15:38:24.910613    9116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 15:38:24.910617    9116 kubeadm.go:310] 
	I1211 15:38:24.910650    9116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 15:38:24.910654    9116 kubeadm.go:310] 
	I1211 15:38:24.910664    9116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 15:38:24.910692    9116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 15:38:24.910722    9116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 15:38:24.910728    9116 kubeadm.go:310] 
	I1211 15:38:24.910764    9116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 15:38:24.910767    9116 kubeadm.go:310] 
	I1211 15:38:24.910796    9116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 15:38:24.910799    9116 kubeadm.go:310] 
	I1211 15:38:24.910835    9116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 15:38:24.910873    9116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 15:38:24.910914    9116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 15:38:24.910920    9116 kubeadm.go:310] 
	I1211 15:38:24.910959    9116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 15:38:24.910995    9116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 15:38:24.910998    9116 kubeadm.go:310] 
	I1211 15:38:24.911053    9116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dsob3n.pl1rcy9wqctvzov5 \
	I1211 15:38:24.911106    9116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 \
	I1211 15:38:24.911117    9116 kubeadm.go:310] 	--control-plane 
	I1211 15:38:24.911119    9116 kubeadm.go:310] 
	I1211 15:38:24.911167    9116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 15:38:24.911171    9116 kubeadm.go:310] 
	I1211 15:38:24.911244    9116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dsob3n.pl1rcy9wqctvzov5 \
	I1211 15:38:24.911295    9116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 
	I1211 15:38:24.911487    9116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 15:38:24.911511    9116 cni.go:84] Creating CNI manager for ""
	I1211 15:38:24.911519    9116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:38:24.915004    9116 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 15:38:24.922189    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 15:38:24.925231    9116 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
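	The "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above renders a bridge CNI config in memory and writes it into the guest. The sketch below reproduces that step under stated assumptions: the JSON is a generic minimal bridge conflist, not necessarily the exact 496-byte file minikube ships, and the subnet is a common default rather than one taken from this run.

	    // cni_bridge.go - sketch of writing a bridge CNI conflist (contents assumed).
	    package main

	    import "os"

	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        // sudo mkdir -p /etc/cni/net.d, as in the ssh_runner line above
	        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	            panic(err)
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            panic(err)
	        }
	    }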
	I1211 15:38:24.930274    9116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 15:38:24.930334    9116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 15:38:24.930335    9116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-684000 minikube.k8s.io/updated_at=2024_12_11T15_38_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=stopped-upgrade-684000 minikube.k8s.io/primary=true
	I1211 15:38:24.973780    9116 ops.go:34] apiserver oom_adj: -16
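	The oom_adj check above ("cat /proc/$(pgrep kube-apiserver)/oom_adj" reporting -16) verifies that the kernel OOM killer is biased away from the apiserver process. A small Go equivalent, using pgrep flags close to the ones in the log:

	    // oom_adj.go - sketch of the apiserver OOM-score check above.
	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // newest process whose command line matches, as pgrep -xnf does above
	        out, err := exec.Command("pgrep", "-n", "-f", "kube-apiserver").Output()
	        if err != nil {
	            fmt.Println("apiserver process not found:", err)
	            return
	        }
	        pid := strings.TrimSpace(string(out))
	        // legacy OOM interface; -16 means "much less likely to be OOM-killed"
	        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	        if err != nil {
	            fmt.Println("read failed:", err)
	            return
	        }
	        fmt.Printf("apiserver oom_adj: %s", adj)
	    }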
	I1211 15:38:24.973777    9116 kubeadm.go:1113] duration metric: took 43.496542ms to wait for elevateKubeSystemPrivileges
	I1211 15:38:24.973818    9116 kubeadm.go:394] duration metric: took 4m11.592381375s to StartCluster
	I1211 15:38:24.973829    9116 settings.go:142] acquiring lock: {Name:mk7be6692255448ff6d4be3295ef81ca16b62a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:24.974010    9116 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:38:24.974396    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:24.974718    9116 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:38:24.974809    9116 config.go:182] Loaded profile config "stopped-upgrade-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:38:24.974969    9116 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 15:38:24.975004    9116 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-684000"
	I1211 15:38:24.975013    9116 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-684000"
	W1211 15:38:24.975019    9116 addons.go:243] addon storage-provisioner should already be in state true
	I1211 15:38:24.975029    9116 host.go:66] Checking if "stopped-upgrade-684000" exists ...
	I1211 15:38:24.975042    9116 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-684000"
	I1211 15:38:24.975101    9116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-684000"
	I1211 15:38:24.975483    9116 retry.go:31] will retry after 1.423033113s: connect: dial unix /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/monitor: connect: connection refused
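	The "will retry after 1.423033113s" line above comes from a backoff helper reacting to the refused unix-socket dial to the machine monitor. The following is a generic stand-in for such a helper, not minikube's actual retry package; the attempt count and base delay are arbitrary.

	    // retry_sketch.go - jittered exponential backoff, as suggested by retry.go:31 above.
	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retry runs fn up to attempts times, sleeping a jittered,
	    // exponentially growing delay between failures.
	    func retry(attempts int, base time.Duration, fn func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = fn(); err == nil {
	                return nil
	            }
	            delay := base << uint(i) // exponential growth
	            // +/-25% jitter so concurrent retriers do not synchronize
	            delay += time.Duration(rand.Int63n(int64(delay/2))) - delay/4
	            fmt.Printf("will retry after %s: %v\n", delay, err)
	            time.Sleep(delay)
	        }
	        return err
	    }

	    func main() {
	        err := retry(4, time.Second, func() error {
	            return errors.New("connect: connection refused") // stand-in for the socket dial
	        })
	        fmt.Println("gave up:", err)
	    }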
	I1211 15:38:24.979227    9116 out.go:177] * Verifying Kubernetes components...
	I1211 15:38:24.987157    9116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:38:24.991164    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:38:24.995204    9116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:24.995212    9116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 15:38:24.995220    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:38:25.060991    9116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:38:25.065840    9116 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:38:25.065885    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:38:25.069675    9116 api_server.go:72] duration metric: took 94.948167ms to wait for apiserver process to appear ...
	I1211 15:38:25.069684    9116 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:38:25.069692    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:25.091338    9116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
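	The addon apply above runs the version-pinned kubectl inside the guest with KUBECONFIG pointing at the VM-local config. A Go sketch of issuing that exact command (paths copied from the log; error handling simplified, and in a real run this executes over SSH rather than locally):

	    // apply_addon.go - sketch of the storage-provisioner apply step above.
	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        // sudo accepts leading VAR=value arguments as environment settings
	        cmd := exec.Command("sudo",
	            "KUBECONFIG=/var/lib/minikube/kubeconfig",
	            "/var/lib/minikube/binaries/v1.24.1/kubectl",
	            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            fmt.Println("apply failed:", err)
	        }
	    }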
	I1211 15:38:25.769215    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:25.769360    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:25.784601    9127 logs.go:282] 2 containers: [d5c98d25fb5c 54bb8dab6d62]
	I1211 15:38:25.784686    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:25.795521    9127 logs.go:282] 2 containers: [02d318e6eaa7 6be8bf310db2]
	I1211 15:38:25.795607    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:25.806598    9127 logs.go:282] 2 containers: [c4d4e2cbd6f6 a954fb185965]
	I1211 15:38:25.806685    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:25.817537    9127 logs.go:282] 2 containers: [21b0e2c71d55 d34888fb8fe2]
	I1211 15:38:25.817622    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:25.828376    9127 logs.go:282] 2 containers: [e7a7b85c462e 1140a38c8ff2]
	I1211 15:38:25.828459    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:25.839024    9127 logs.go:282] 2 containers: [f22aba41f66e 14d75f9b9c9d]
	I1211 15:38:25.839114    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:25.853349    9127 logs.go:282] 0 containers: []
	W1211 15:38:25.853364    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:25.853435    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:25.863955    9127 logs.go:282] 0 containers: []
	W1211 15:38:25.863969    9127 logs.go:284] No container was found matching "storage-provisioner"
	I1211 15:38:25.863975    9127 logs.go:123] Gathering logs for kube-proxy [e7a7b85c462e] ...
	I1211 15:38:25.863982    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7a7b85c462e"
	I1211 15:38:25.877795    9127 logs.go:123] Gathering logs for kube-controller-manager [f22aba41f66e] ...
	I1211 15:38:25.877806    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f22aba41f66e"
	I1211 15:38:25.895097    9127 logs.go:123] Gathering logs for kube-controller-manager [14d75f9b9c9d] ...
	I1211 15:38:25.895112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14d75f9b9c9d"
	I1211 15:38:25.906829    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:25.906844    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:25.946095    9127 logs.go:123] Gathering logs for kube-apiserver [54bb8dab6d62] ...
	I1211 15:38:25.946112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54bb8dab6d62"
	I1211 15:38:25.962364    9127 logs.go:123] Gathering logs for coredns [c4d4e2cbd6f6] ...
	I1211 15:38:25.962377    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d4e2cbd6f6"
	I1211 15:38:25.974191    9127 logs.go:123] Gathering logs for kube-scheduler [d34888fb8fe2] ...
	I1211 15:38:25.974203    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34888fb8fe2"
	I1211 15:38:25.991294    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:25.991308    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:26.027879    9127 logs.go:123] Gathering logs for kube-apiserver [d5c98d25fb5c] ...
	I1211 15:38:26.027891    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c98d25fb5c"
	I1211 15:38:26.043892    9127 logs.go:123] Gathering logs for etcd [02d318e6eaa7] ...
	I1211 15:38:26.043906    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d318e6eaa7"
	I1211 15:38:26.058473    9127 logs.go:123] Gathering logs for kube-proxy [1140a38c8ff2] ...
	I1211 15:38:26.058484    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1140a38c8ff2"
	I1211 15:38:26.070049    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:38:26.070070    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:26.082215    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:26.082226    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:26.086651    9127 logs.go:123] Gathering logs for etcd [6be8bf310db2] ...
	I1211 15:38:26.086657    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be8bf310db2"
	I1211 15:38:26.101532    9127 logs.go:123] Gathering logs for coredns [a954fb185965] ...
	I1211 15:38:26.101543    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a954fb185965"
	I1211 15:38:26.113140    9127 logs.go:123] Gathering logs for kube-scheduler [21b0e2c71d55] ...
	I1211 15:38:26.113151    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21b0e2c71d55"
	I1211 15:38:26.124971    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:26.124980    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:28.647695    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:26.399824    9116 kapi.go:59] client config for stopped-upgrade-684000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065580b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
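	The kapi.go dump above is a client-go rest.Config built from the profile's client certificate and the cluster CA. A self-contained sketch of constructing the same kind of client follows; it uses the public rest.TLSClientConfig (the log shows the internal sanitized form), paths are copied from the log, and it requires k8s.io/client-go.

	    // kapi_client.go - sketch of the client config dumped above.
	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        cfg := &rest.Config{
	            Host: "https://10.0.2.15:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                // paths as in the log; adjust to your minikube home
	                CertFile: "/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.crt",
	                KeyFile:  "/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key",
	                CAFile:   "/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt",
	            },
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        v, err := clientset.Discovery().ServerVersion()
	        if err != nil {
	            fmt.Println("apiserver unreachable:", err) // expected while 8443 times out
	            return
	        }
	        fmt.Println("server version:", v.GitVersion)
	    }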
	I1211 15:38:26.400846    9116 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-684000"
	W1211 15:38:26.400852    9116 addons.go:243] addon default-storageclass should already be in state true
	I1211 15:38:26.400863    9116 host.go:66] Checking if "stopped-upgrade-684000" exists ...
	I1211 15:38:26.401535    9116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:26.401542    9116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 15:38:26.401548    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:38:26.438210    9116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:26.515946    9116 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 15:38:26.515960    9116 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 15:38:30.071625    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:30.071684    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:33.649025    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:33.649077    9127 kubeadm.go:597] duration metric: took 4m3.816557083s to restartPrimaryControlPlane
	W1211 15:38:33.649120    9127 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 15:38:33.649137    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1211 15:38:34.612826    9127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 15:38:34.618438    9127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:38:34.621457    9127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:38:34.624132    9127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 15:38:34.624139    9127 kubeadm.go:157] found existing configuration files:
	
	I1211 15:38:34.624177    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf
	I1211 15:38:34.626771    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 15:38:34.626805    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:38:34.630235    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf
	I1211 15:38:34.633330    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 15:38:34.633615    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:38:34.636119    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf
	I1211 15:38:34.638672    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 15:38:34.638707    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:38:34.641521    9127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf
	I1211 15:38:34.643922    9127 kubeadm.go:163] "https://control-plane.minikube.internal:61515" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61515 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 15:38:34.643948    9127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 15:38:34.646752    9127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 15:38:34.663853    9127 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1211 15:38:34.663882    9127 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 15:38:34.717423    9127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 15:38:34.717471    9127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 15:38:34.717527    9127 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1211 15:38:34.766985    9127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 15:38:35.071876    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:35.071906    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:34.771164    9127 out.go:235]   - Generating certificates and keys ...
	I1211 15:38:34.771205    9127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 15:38:34.771247    9127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 15:38:34.771294    9127 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 15:38:34.771329    9127 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1211 15:38:34.771369    9127 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 15:38:34.771402    9127 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1211 15:38:34.771444    9127 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1211 15:38:34.771475    9127 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1211 15:38:34.771516    9127 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 15:38:34.771559    9127 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 15:38:34.771577    9127 kubeadm.go:310] [certs] Using the existing "sa" key
	I1211 15:38:34.771607    9127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 15:38:34.843859    9127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 15:38:35.070884    9127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 15:38:35.223662    9127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 15:38:35.492817    9127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 15:38:35.520835    9127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 15:38:35.522119    9127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 15:38:35.522145    9127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 15:38:35.613696    9127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 15:38:35.617008    9127 out.go:235]   - Booting up control plane ...
	I1211 15:38:35.617068    9127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 15:38:35.617113    9127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 15:38:35.617154    9127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 15:38:35.617200    9127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 15:38:35.617275    9127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1211 15:38:40.072133    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:40.072154    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:39.618549    9127 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002155 seconds
	I1211 15:38:39.618647    9127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 15:38:39.624721    9127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 15:38:40.133584    9127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 15:38:40.133727    9127 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-031000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 15:38:40.637075    9127 kubeadm.go:310] [bootstrap-token] Using token: o2tufw.jgisq56w1ljinhvv
	I1211 15:38:40.639718    9127 out.go:235]   - Configuring RBAC rules ...
	I1211 15:38:40.639767    9127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 15:38:40.639811    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 15:38:40.641637    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 15:38:40.643514    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 15:38:40.644475    9127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 15:38:40.645293    9127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 15:38:40.648307    9127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 15:38:40.824710    9127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 15:38:41.041186    9127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 15:38:41.041676    9127 kubeadm.go:310] 
	I1211 15:38:41.041708    9127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 15:38:41.041713    9127 kubeadm.go:310] 
	I1211 15:38:41.041753    9127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 15:38:41.041759    9127 kubeadm.go:310] 
	I1211 15:38:41.041823    9127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 15:38:41.041870    9127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 15:38:41.041901    9127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 15:38:41.041918    9127 kubeadm.go:310] 
	I1211 15:38:41.041962    9127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 15:38:41.041966    9127 kubeadm.go:310] 
	I1211 15:38:41.041992    9127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 15:38:41.041998    9127 kubeadm.go:310] 
	I1211 15:38:41.042023    9127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 15:38:41.042093    9127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 15:38:41.042134    9127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 15:38:41.042136    9127 kubeadm.go:310] 
	I1211 15:38:41.042197    9127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 15:38:41.042238    9127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 15:38:41.042241    9127 kubeadm.go:310] 
	I1211 15:38:41.042295    9127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2tufw.jgisq56w1ljinhvv \
	I1211 15:38:41.042360    9127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 \
	I1211 15:38:41.042374    9127 kubeadm.go:310] 	--control-plane 
	I1211 15:38:41.042377    9127 kubeadm.go:310] 
	I1211 15:38:41.042418    9127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 15:38:41.042423    9127 kubeadm.go:310] 
	I1211 15:38:41.042458    9127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2tufw.jgisq56w1ljinhvv \
	I1211 15:38:41.042508    9127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 
	I1211 15:38:41.042563    9127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 15:38:41.042568    9127 cni.go:84] Creating CNI manager for ""
	I1211 15:38:41.042575    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:38:41.046345    9127 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 15:38:41.053330    9127 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 15:38:41.056425    9127 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1211 15:38:41.061333    9127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 15:38:41.061388    9127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 15:38:41.061393    9127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-031000 minikube.k8s.io/updated_at=2024_12_11T15_38_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=running-upgrade-031000 minikube.k8s.io/primary=true
	I1211 15:38:41.104329    9127 ops.go:34] apiserver oom_adj: -16
	I1211 15:38:41.104336    9127 kubeadm.go:1113] duration metric: took 42.998083ms to wait for elevateKubeSystemPrivileges
	I1211 15:38:41.104345    9127 kubeadm.go:394] duration metric: took 4m11.292743208s to StartCluster
	I1211 15:38:41.104354    9127 settings.go:142] acquiring lock: {Name:mk7be6692255448ff6d4be3295ef81ca16b62a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:41.104437    9127 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:38:41.104825    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:41.105037    9127 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:38:41.105107    9127 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 15:38:41.105144    9127 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-031000"
	I1211 15:38:41.105152    9127 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-031000"
	W1211 15:38:41.105157    9127 addons.go:243] addon storage-provisioner should already be in state true
	I1211 15:38:41.105167    9127 host.go:66] Checking if "running-upgrade-031000" exists ...
	I1211 15:38:41.105150    9127 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-031000"
	I1211 15:38:41.105185    9127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-031000"
	I1211 15:38:41.105245    9127 config.go:182] Loaded profile config "running-upgrade-031000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:38:41.105580    9127 retry.go:31] will retry after 1.415908019s: connect: dial unix /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/monitor: connect: connection refused
	I1211 15:38:41.108391    9127 out.go:177] * Verifying Kubernetes components...
	I1211 15:38:41.116323    9127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:38:41.119296    9127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:38:41.123372    9127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:41.123380    9127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 15:38:41.123386    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:38:41.218346    9127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:38:41.224279    9127 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:38:41.224345    9127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:38:41.229041    9127 api_server.go:72] duration metric: took 123.995541ms to wait for apiserver process to appear ...
	I1211 15:38:41.229051    9127 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:38:41.229059    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:41.235479    9127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:42.524656    9127 kapi.go:59] client config for running-upgrade-031000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/running-upgrade-031000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044bc0b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:38:42.524813    9127 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-031000"
	W1211 15:38:42.524821    9127 addons.go:243] addon default-storageclass should already be in state true
	I1211 15:38:42.524843    9127 host.go:66] Checking if "running-upgrade-031000" exists ...
	I1211 15:38:42.525580    9127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:42.525587    9127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 15:38:42.525615    9127 sshutil.go:53] new ssh client: &{IP:localhost Port:61422 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/running-upgrade-031000/id_rsa Username:docker}
	I1211 15:38:42.568397    9127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:42.657406    9127 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 15:38:42.657418    9127 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 15:38:45.072451    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:45.072519    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:46.230997    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:46.232048    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:50.072980    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:50.073057    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:51.232460    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:51.232483    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:55.073661    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:55.073684    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1211 15:38:56.518098    9116 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1211 15:38:56.521501    9116 out.go:177] * Enabled addons: storage-provisioner
	I1211 15:38:56.232960    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:56.233016    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:56.527294    9116 addons.go:510] duration metric: took 31.553426708s for enable addons: enabled=[storage-provisioner]
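Note: the warning above shows why enabling default-storageclass gives up after ~30s: the addon callback has to list StorageClasses through the apiserver at 10.0.2.15:8443, and the connection times out. A sketch of that failing call, assuming client-go and the kubeconfig path referenced in the log (illustrative, not the addon's actual source):

    // Sketch: the StorageClass list request that the default-storageclass
    // addon callback makes. With the apiserver unreachable it returns the
    // same "dial tcp 10.0.2.15:8443: i/o timeout" seen in the warning.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as referenced in the log; adjust for other machines.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.Timeout = 5 * time.Second
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("listing StorageClasses failed:", err)
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
        }
    }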
	I1211 15:39:00.074474    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:00.074524    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:01.234032    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:01.234073    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:05.074812    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:05.074871    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:06.235137    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:06.235160    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:10.075999    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:10.076034    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:11.236414    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:11.236454    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1211 15:39:12.658878    9127 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1211 15:39:12.663226    9127 out.go:177] * Enabled addons: storage-provisioner
	I1211 15:39:12.670134    9127 addons.go:510] duration metric: took 31.566017333s for enable addons: enabled=[storage-provisioner]
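Note: from here both processes (9116 and 9127) settle into the loop visible in the api_server.go lines: a GET against https://10.0.2.15:8443/healthz with a short client timeout, retried roughly every five seconds, each attempt ending in "context deadline exceeded". A self-contained sketch of that polling pattern (names are illustrative, and certificate verification is skipped only to keep the sketch standalone):

    // Sketch: the healthz polling pattern from the log. Each failed attempt
    // prints a "stopped:" line analogous to api_server.go:269 above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The test VM serves a self-signed cluster CA; skipping
                // verification keeps this sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 10; i++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. context deadline exceeded
                time.Sleep(5 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }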
	I1211 15:39:15.077323    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:15.077347    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:16.238145    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:16.238177    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:20.079676    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:20.079738    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:21.238613    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:21.238644    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:25.081952    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:25.082073    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:25.093553    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:25.093638    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:25.104355    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:25.104427    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:25.119018    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:25.119096    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:25.129768    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:25.129849    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:25.140112    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:25.140192    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:25.150472    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:25.150557    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:25.160835    9116 logs.go:282] 0 containers: []
	W1211 15:39:25.160850    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:25.160921    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:25.171058    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:25.171075    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:25.171081    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:25.188613    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:25.188625    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:25.214438    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:25.214448    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:25.228379    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:25.228391    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:25.264105    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:25.264116    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:25.278014    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:25.278027    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:26.240712    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:26.240755    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:25.300586    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:25.300599    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:25.311861    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:25.311870    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:25.323664    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:25.323677    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:25.338070    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:25.338081    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:25.350509    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:25.350519    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:25.361610    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:25.361620    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:25.396835    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:25.396847    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
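Note: each time the health check fails, the driver collects diagnostics as shown above: it resolves one container per control-plane component via a docker ps name filter (k8s_<component>), tails the last 400 lines of each container's logs, then gathers the kubelet and docker journals, dmesg, and kubectl describe nodes. A sketch of that gathering loop (run locally here for simplicity; minikube executes the same docker commands over SSH inside the node):

    // Sketch: the per-component log gathering visible in the logs.go lines.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, c := range components {
            // Matches: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "list failed:", err)
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                fmt.Printf("=== %s [%s] ===\n", c, id)
                // Matches: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(logs))
            }
        }
    }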
	I1211 15:39:27.903596    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:31.242803    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:31.242830    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:32.906044    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:32.906188    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:32.919037    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:32.919123    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:32.930540    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:32.930622    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:32.940689    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:32.940774    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:32.950969    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:32.951049    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:32.961345    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:32.961449    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:32.971719    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:32.971806    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:32.986377    9116 logs.go:282] 0 containers: []
	W1211 15:39:32.986389    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:32.986452    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:32.998513    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:32.998527    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:32.998532    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:33.012623    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:33.012635    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:33.028353    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:33.028362    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:33.043323    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:33.043338    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:33.061109    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:33.061119    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:33.065479    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:33.065486    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:33.103196    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:33.103210    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:33.123764    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:33.123777    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:33.135574    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:33.135586    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:33.147535    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:33.147548    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:33.172771    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:33.172782    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:33.184602    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:33.184615    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:33.219654    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:33.219667    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:36.244638    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:36.244682    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:35.735983    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:41.246813    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:41.246910    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:41.258572    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:39:41.258664    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:41.269123    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:39:41.269201    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:41.280344    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:39:41.280423    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:41.291020    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:39:41.291095    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:41.301818    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:39:41.301889    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:41.312706    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:39:41.312774    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:41.327869    9127 logs.go:282] 0 containers: []
	W1211 15:39:41.327880    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:41.327951    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:41.338545    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:39:41.338560    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:39:41.338568    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:39:41.349514    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:39:41.349525    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:39:41.364579    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:39:41.364589    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:39:41.376143    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:39:41.376153    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:39:41.394002    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:39:41.394012    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:39:41.408086    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:39:41.408097    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:39:41.422174    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:39:41.422184    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:39:41.434216    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:39:41.434227    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:39:41.449669    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:41.449680    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:41.474289    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:41.474297    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:41.509199    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:41.509206    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:41.513622    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:41.513631    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:41.555639    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:39:41.555650    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:40.738089    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:40.738413    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:40.761894    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:40.762030    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:40.777869    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:40.777953    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:40.790696    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:40.790783    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:40.801606    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:40.801678    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:40.812258    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:40.812334    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:40.822775    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:40.822852    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:40.833457    9116 logs.go:282] 0 containers: []
	W1211 15:39:40.833471    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:40.833532    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:40.844141    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:40.844157    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:40.844165    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:40.883506    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:40.883522    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:40.895618    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:40.895634    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:40.910460    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:40.910474    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:40.922966    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:40.922982    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:40.934462    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:40.934473    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:40.958442    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:40.958452    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:40.969384    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:40.969396    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:41.004539    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:41.004561    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:41.009328    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:41.009334    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:41.023966    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:41.023977    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:41.038873    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:41.038888    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:41.051471    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:41.051486    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:43.569342    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:44.069609    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:48.569499    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:48.569733    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:48.585592    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:48.585687    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:48.597500    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:48.597588    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:48.608441    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:48.608526    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:48.619339    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:48.619446    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:48.634789    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:48.634874    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:48.645892    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:48.645971    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:48.658182    9116 logs.go:282] 0 containers: []
	W1211 15:39:48.658196    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:48.658267    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:48.668819    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:48.668835    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:48.668840    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:48.682987    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:48.682999    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:48.696176    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:48.696187    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:48.708139    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:48.708150    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:48.722830    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:48.722839    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:48.739133    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:48.739144    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:48.773127    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:48.773135    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:48.777284    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:48.777292    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:48.813974    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:48.813988    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:48.837473    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:48.837481    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:48.848575    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:48.848587    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:48.863613    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:48.863623    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:48.875300    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:48.875311    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:49.071822    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:49.071968    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:49.085392    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:39:49.085483    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:49.096871    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:39:49.096952    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:49.107432    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:39:49.107509    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:49.117601    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:39:49.117687    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:49.128416    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:39:49.128504    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:49.139308    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:39:49.139392    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:49.149188    9127 logs.go:282] 0 containers: []
	W1211 15:39:49.149199    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:49.149279    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:49.160321    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:39:49.160336    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:49.160342    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:49.196934    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:39:49.196955    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:39:49.211673    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:39:49.211687    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:39:49.224303    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:39:49.224316    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:39:49.240341    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:39:49.240351    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:39:49.257868    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:49.257883    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:49.282643    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:39:49.282651    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:49.296649    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:49.296660    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:49.301525    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:49.301532    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:49.336036    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:39:49.336047    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:39:49.350876    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:39:49.350886    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:39:49.362251    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:39:49.362261    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:39:49.373739    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:39:49.373751    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:39:51.886980    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:51.392045    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:56.889127    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:56.889295    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:56.902845    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:39:56.902918    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:56.913681    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:39:56.913751    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:56.924610    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:39:56.924692    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:56.935015    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:39:56.935101    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:56.945366    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:39:56.945439    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:56.955797    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:39:56.955878    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:56.965952    9127 logs.go:282] 0 containers: []
	W1211 15:39:56.965964    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:56.966030    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:56.976773    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:39:56.976789    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:56.976795    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:57.010713    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:57.010724    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:57.045126    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:39:57.045139    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:39:57.059111    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:39:57.059121    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:39:57.072347    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:39:57.072359    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:39:57.084158    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:39:57.084169    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:39:57.099854    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:39:57.099866    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:39:57.112104    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:39:57.112115    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:39:57.131105    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:39:57.131120    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:57.143292    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:57.143302    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:57.148422    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:39:57.148429    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:39:57.160404    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:39:57.160414    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:39:57.177919    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:57.177932    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:56.394125    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:56.394430    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:56.419377    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:56.419490    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:56.433684    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:56.433772    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:56.445282    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:56.445383    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:56.455786    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:56.455868    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:56.466512    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:56.466591    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:56.476808    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:56.476889    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:56.489658    9116 logs.go:282] 0 containers: []
	W1211 15:39:56.489675    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:56.489739    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:56.500236    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:56.500257    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:56.500262    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:56.514554    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:56.514567    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:56.526000    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:56.526013    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:56.537819    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:56.537833    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:56.552498    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:56.552511    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:56.569900    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:56.569911    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:56.582661    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:56.582672    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:56.591783    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:56.591793    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:56.607026    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:56.607037    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:56.619128    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:56.619142    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:56.631186    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:56.631199    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:56.657147    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:56.657159    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:56.691985    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:56.691994    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:59.232716    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:59.704366    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:04.234903    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:04.235422    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:04.275429    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:04.275591    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:04.295827    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:04.295935    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:04.310916    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:04.311005    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:04.323298    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:04.323372    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:04.333722    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:04.333814    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:04.344302    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:04.344380    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:04.354511    9116 logs.go:282] 0 containers: []
	W1211 15:40:04.354522    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:04.354589    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:04.370598    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:04.370615    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:04.370621    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:04.382793    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:04.382806    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:04.397324    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:04.397335    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:04.415162    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:04.415173    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:04.450602    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:04.450613    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:04.454915    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:04.454925    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:04.490856    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:04.490867    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:04.505403    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:04.505416    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:04.517454    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:04.517468    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:04.543305    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:04.543315    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:04.557602    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:04.557612    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:04.569584    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:04.569595    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:04.581243    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:04.581253    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:04.706429    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:04.706585    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:04.717839    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:04.717926    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:04.728369    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:04.728451    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:04.739447    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:04.739530    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:04.750473    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:04.750558    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:04.761039    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:04.761110    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:04.772013    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:04.772090    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:04.786749    9127 logs.go:282] 0 containers: []
	W1211 15:40:04.786762    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:04.786830    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:04.797125    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:04.797142    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:04.797147    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:04.831920    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:04.831929    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:04.846427    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:04.846438    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:04.862231    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:04.862245    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:04.875958    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:04.875969    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:04.899621    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:04.899631    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:04.911727    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:04.911738    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:04.929446    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:04.929456    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:04.940677    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:04.940687    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:04.945235    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:04.945242    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:04.985796    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:04.985810    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:04.997933    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:04.997947    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:05.009468    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:05.009482    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:07.525882    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:07.095539    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:12.527976    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:12.528095    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:12.545802    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:12.545885    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:12.556255    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:12.556335    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:12.566689    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:12.566761    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:12.576872    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:12.576942    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:12.587425    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:12.587500    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:12.598130    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:12.598199    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:12.612045    9127 logs.go:282] 0 containers: []
	W1211 15:40:12.612056    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:12.612120    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:12.626292    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:12.626313    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:12.626318    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:12.641224    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:12.641233    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:12.665463    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:12.665486    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:12.678531    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:12.678541    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:12.683212    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:12.683220    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:12.721191    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:12.721202    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:12.735682    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:12.735695    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:12.747965    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:12.747976    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:12.760140    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:12.760155    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:12.777654    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:12.777664    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:12.789841    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:12.789852    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:12.825089    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:12.825098    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:12.843131    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:12.843142    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:12.097610    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:12.097916    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:12.125921    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:12.126033    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:12.141913    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:12.142008    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:12.158462    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:12.158549    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:12.168751    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:12.168835    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:12.179304    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:12.179383    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:12.190334    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:12.190416    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:12.200226    9116 logs.go:282] 0 containers: []
	W1211 15:40:12.200237    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:12.200305    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:12.211169    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:12.211184    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:12.211190    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:12.223411    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:12.223422    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:12.242381    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:12.242389    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:12.254084    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:12.254098    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:12.289870    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:12.289880    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:12.295390    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:12.295398    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:12.310352    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:12.310361    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:12.324071    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:12.324081    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:12.348543    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:12.348563    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:12.360180    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:12.360200    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:12.396382    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:12.396395    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:12.408740    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:12.408754    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:12.422452    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:12.422467    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:14.942974    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:15.359815    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:19.945062    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:19.945244    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:19.959293    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:19.959380    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:19.970798    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:19.970885    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:19.981588    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:19.981667    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:19.991964    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:19.992034    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:20.003159    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:20.003248    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:20.013953    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:20.014027    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:20.028292    9116 logs.go:282] 0 containers: []
	W1211 15:40:20.028304    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:20.028372    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:20.045884    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:20.045901    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:20.045907    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:20.080738    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:20.080747    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:20.095296    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:20.095306    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:20.109812    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:20.109822    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:20.125345    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:20.125355    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:20.142468    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:20.142479    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:20.153628    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:20.153638    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:20.164997    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:20.165008    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:20.169264    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:20.169270    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:20.209180    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:20.209193    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:20.221007    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:20.221018    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:20.232595    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:20.232605    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:20.244006    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:20.244016    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:20.361901    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:20.362033    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:20.373785    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:20.373862    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:20.385099    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:20.385175    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:20.395797    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:20.395864    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:20.406534    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:20.406615    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:20.424063    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:20.424143    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:20.434759    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:20.434826    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:20.444903    9127 logs.go:282] 0 containers: []
	W1211 15:40:20.444915    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:20.444981    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:20.457078    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:20.457093    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:20.457098    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:20.473047    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:20.473058    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:20.493690    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:20.493700    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:20.508321    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:20.508332    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:20.532774    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:20.532785    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:20.544293    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:20.544305    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:20.582351    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:20.582361    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:20.593924    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:20.593934    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:20.613245    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:20.613259    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:20.627285    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:20.627295    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:20.643243    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:20.643257    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:20.655767    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:20.655777    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:20.691101    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:20.691109    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:23.197266    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:22.769269    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:28.199435    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:28.199595    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:28.211708    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:28.211797    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:28.222695    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:28.222766    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:28.233552    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:28.233621    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:28.246218    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:28.246299    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:28.257340    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:28.257417    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:28.268075    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:28.268158    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:28.278237    9127 logs.go:282] 0 containers: []
	W1211 15:40:28.278247    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:28.278312    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:28.289348    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:28.289362    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:28.289368    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:28.303325    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:28.303336    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:28.315507    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:28.315521    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:28.331259    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:28.331270    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:28.354788    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:28.354798    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:28.367629    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:28.367640    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:28.382491    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:28.382501    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:28.387608    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:28.387615    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:28.424642    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:28.424653    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:28.435913    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:28.435922    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:28.447326    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:28.447337    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:28.464831    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:28.464840    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:28.481497    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:28.481511    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:27.769810    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:27.769943    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:27.786550    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:27.786637    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:27.798098    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:27.798186    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:27.808827    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:27.808903    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:27.819062    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:27.819154    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:27.829800    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:27.829882    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:27.840209    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:27.840289    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:27.850306    9116 logs.go:282] 0 containers: []
	W1211 15:40:27.850317    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:27.850375    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:27.861228    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:27.861248    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:27.861254    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:27.872796    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:27.872809    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:27.895641    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:27.895651    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:27.899714    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:27.899722    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:27.935049    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:27.935060    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:27.949064    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:27.949074    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:27.960603    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:27.960613    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:27.972523    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:27.972532    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:27.990961    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:27.990973    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:28.003461    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:28.003474    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:28.038555    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:28.038566    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:28.052834    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:28.052844    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:28.064447    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:28.064457    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:31.018706    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:30.580853    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:36.020810    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:36.020992    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:36.031803    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:36.031887    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:36.042468    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:36.042542    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:36.055674    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:36.055754    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:36.065958    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:36.066051    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:36.076211    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:36.076280    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:36.087142    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:36.087218    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:36.097533    9127 logs.go:282] 0 containers: []
	W1211 15:40:36.097545    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:36.097610    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:36.108284    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:36.108299    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:36.108305    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:36.112995    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:36.113006    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:36.146999    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:36.147013    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:36.161239    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:36.161253    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:36.178425    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:36.178435    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:36.190523    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:36.190534    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:36.214037    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:36.214045    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:36.225772    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:36.225785    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:36.263121    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:36.263139    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:36.281417    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:36.281432    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:36.295115    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:36.295132    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:36.308361    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:36.308377    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:36.333575    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:36.333592    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:38.849373    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:35.582999    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:35.583140    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:35.596056    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:35.596149    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:35.607036    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:35.607112    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:35.617249    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:35.617320    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:35.627749    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:35.627830    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:35.637917    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:35.638005    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:35.648298    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:35.648376    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:35.658359    9116 logs.go:282] 0 containers: []
	W1211 15:40:35.658376    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:35.658437    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:35.669053    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:35.669070    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:35.669075    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:35.680418    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:35.680430    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:35.716062    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:35.716070    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:35.720355    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:35.720363    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:35.756684    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:35.756695    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:35.770916    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:35.770930    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:35.785112    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:35.785125    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:35.803062    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:35.803075    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:35.814683    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:35.814694    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:35.830408    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:35.830422    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:35.845886    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:35.845896    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:35.857659    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:35.857669    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:35.882646    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:35.882655    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:38.396435    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:43.849908    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:43.850051    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:43.861271    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:43.861350    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:43.398736    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:43.398944    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:43.416154    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:43.416261    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:43.428125    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:43.428201    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:43.441406    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:40:43.441494    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:43.451658    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:43.451740    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:43.462543    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:43.462643    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:43.478373    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:43.478450    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:43.488774    9116 logs.go:282] 0 containers: []
	W1211 15:40:43.488787    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:43.488854    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:43.499142    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:43.499160    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:43.499166    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:43.532853    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:43.532867    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:43.537918    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:43.537929    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:43.562523    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:43.562533    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:43.576626    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:43.576641    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:43.591489    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:43.591500    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:43.603674    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:43.603689    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:43.614882    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:40:43.614893    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:40:43.625918    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:40:43.625928    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:40:43.637621    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:43.637631    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:43.655098    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:43.655112    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:43.670148    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:43.670160    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:43.682706    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:43.682718    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:43.717985    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:43.717996    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:43.732154    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:43.732167    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:43.871792    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:43.871867    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:43.882507    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:43.882590    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:43.893372    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:43.893451    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:43.904182    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:43.904259    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:43.914687    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:43.914762    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:43.925023    9127 logs.go:282] 0 containers: []
	W1211 15:40:43.925033    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:43.925094    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:43.935335    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:43.935351    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:43.935357    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:43.939893    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:43.939900    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:43.951207    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:43.951220    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:43.963167    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:43.963177    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:43.975572    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:43.975582    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:43.993027    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:43.993037    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:44.004243    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:44.004258    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:44.039201    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:44.039209    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:44.055140    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:44.055150    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:44.069084    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:44.069094    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:44.085347    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:44.085363    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:44.110711    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:44.110727    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:44.122154    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:44.122170    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:46.657861    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:46.247391    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:51.659843    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:51.660000    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:51.674817    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:51.674902    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:51.685254    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:51.685335    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:51.697236    9127 logs.go:282] 2 containers: [cccbdb12b2cf ca88055a8d39]
	I1211 15:40:51.697317    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:51.707724    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:51.707796    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:51.717644    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:51.717732    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:51.736096    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:51.736172    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:51.746187    9127 logs.go:282] 0 containers: []
	W1211 15:40:51.746198    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:51.746265    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:51.756816    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:51.756833    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:51.756839    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:51.768282    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:51.768293    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:51.793198    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:51.793206    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:51.828482    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:51.828494    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:51.833336    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:51.833343    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:51.869740    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:51.869751    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:51.887968    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:51.887979    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:51.902363    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:51.902374    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:51.917314    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:51.917324    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:51.928639    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:51.928649    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:51.940486    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:51.940495    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:51.952238    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:51.952248    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:51.964079    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:51.964089    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:40:51.249999    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:51.250266    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:51.269246    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:51.269346    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:51.283311    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:51.283399    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:51.299223    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:40:51.299304    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:51.310216    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:51.310296    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:51.321221    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:51.321303    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:51.332482    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:51.332559    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:51.342495    9116 logs.go:282] 0 containers: []
	W1211 15:40:51.342511    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:51.342580    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:51.353361    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:51.353381    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:51.353387    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:51.391545    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:40:51.391559    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:40:51.403001    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:51.403017    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:51.415759    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:51.415770    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:51.433273    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:51.433283    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:51.458589    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:51.458598    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:51.494722    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:51.494735    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:51.499543    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:40:51.499552    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:40:51.511560    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:51.511571    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:51.526036    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:51.526048    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:51.556608    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:51.556618    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:51.568286    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:51.568295    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:51.582261    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:51.582273    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:51.596144    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:51.596155    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:51.607962    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:51.607973    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:54.121282    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:54.487047    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:59.123370    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:59.123495    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:59.135335    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:59.135426    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:59.150388    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:59.150471    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:59.161102    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:40:59.161188    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:59.171775    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:59.171852    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:59.182620    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:59.182697    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:59.192917    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:59.193006    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:59.203219    9116 logs.go:282] 0 containers: []
	W1211 15:40:59.203229    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:59.203296    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:59.213384    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:59.213402    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:59.213408    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:59.218212    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:59.218219    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:59.239541    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:59.239551    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:59.251002    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:59.251011    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:59.285419    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:59.285428    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:59.299618    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:40:59.299628    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:40:59.311504    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:59.311517    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:59.325501    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:59.325516    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:59.337181    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:59.337191    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:59.360379    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:59.360397    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:59.372864    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:59.372875    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:59.407932    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:40:59.407945    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:40:59.422115    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:59.422129    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:59.439931    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:59.439942    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:59.465135    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:59.465143    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:59.487647    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:59.487751    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:59.498750    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:40:59.498828    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:59.509496    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:40:59.509575    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:59.520460    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:40:59.520537    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:59.532401    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:40:59.532485    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:59.548504    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:40:59.548583    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:59.559604    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:40:59.559678    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:59.569795    9127 logs.go:282] 0 containers: []
	W1211 15:40:59.569806    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:59.569866    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:59.580714    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:40:59.580732    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:59.580738    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:59.605954    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:59.605962    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:59.641746    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:40:59.641757    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:40:59.660659    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:40:59.660668    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:40:59.671826    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:40:59.671838    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:40:59.683690    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:40:59.683701    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:40:59.695675    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:40:59.695687    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:40:59.715665    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:40:59.715676    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:40:59.727457    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:40:59.727468    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:40:59.739135    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:59.739146    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:59.772319    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:40:59.772328    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:40:59.786753    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:40:59.786767    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:40:59.802602    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:40:59.802612    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:59.814224    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:59.814238    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:59.819388    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:40:59.819394    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:02.342782    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:01.979724    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:07.344981    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
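
The api_server.go:253/269 pairs above trace a poll loop: each pass issues a GET against https://10.0.2.15:8443/healthz, and "Client.Timeout exceeded while awaiting headers" is Go's net/http wording for a client-side timeout, here roughly five seconds judging by the gap between "Checking" and "stopped". Two minikube processes, 9116 and 9127, run the same loop concurrently, which is why their interleaved records occasionally step backwards in time. A minimal sketch of one such probe, with TLS verification skipped purely to keep the sketch self-contained (the real apiserver certificate is signed by the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz sketches the retry loop implied by the api_server.go
    // lines above; the interval and attempt count are assumptions.
    func probeHealthz(url string, attempts int) bool {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s Checking-to-stopped gap
            Transport: &http.Transport{
                // skipped only for the sketch; see the note above
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    return true
                }
                err = fmt.Errorf("unexpected status %d", resp.StatusCode)
            }
            fmt.Printf("stopped: %s: %v\n", url, err)
            // the real flow re-enumerates containers and dumps logs here
        }
        return false
    }

    func main() {
        probeHealthz("https://10.0.2.15:8443/healthz", 3)
    }
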
	I1211 15:41:07.345130    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:07.356012    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:07.356096    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:07.366319    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:07.366394    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:07.377688    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:07.377762    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:07.388689    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:07.388759    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:07.399396    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:07.399474    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:07.409931    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:07.410010    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:07.420312    9127 logs.go:282] 0 containers: []
	W1211 15:41:07.420324    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:07.420386    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:07.435953    9127 logs.go:282] 1 containers: [7cc45d9c1547]
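
Between probes, the control-plane containers are re-discovered one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}: kubelet names containers k8s_<component>_<pod>_..., so the filter matches per component, -a also counts exited containers (likely why coredns lists four IDs), and an empty result produces the "No container was found matching" warning seen for kindnet. A sketch of that enumeration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // discover mimics the per-component `docker ps` filter in the log;
    // illustrative only.
    func discover(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids := discover(c)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
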
	I1211 15:41:07.435970    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:07.435976    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:07.447602    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:07.447614    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:07.465111    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:07.465122    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:07.477639    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:07.477651    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:07.482743    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:07.482751    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:07.494809    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:07.494819    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:07.506635    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:07.506649    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:07.519383    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:07.519393    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:07.532270    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:07.532281    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:07.567614    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:07.567624    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:07.608436    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:07.608448    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:07.623331    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:07.623342    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:07.637222    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:07.637234    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:07.650143    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:07.650156    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:07.675542    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:07.675554    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:06.981877    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:06.982005    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:06.992873    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:06.992961    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:07.006406    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:07.006486    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:07.017080    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:07.017155    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:07.027871    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:07.027940    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:07.038228    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:07.038313    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:07.048911    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:07.048984    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:07.059814    9116 logs.go:282] 0 containers: []
	W1211 15:41:07.059832    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:07.059902    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:07.070540    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:07.070557    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:07.070564    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:07.105725    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:07.105735    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:07.119531    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:07.119544    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:07.134416    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:07.134429    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:07.145787    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:07.145798    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:07.160113    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:07.160126    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:07.171894    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:07.171907    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:07.196109    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:07.196122    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:07.207660    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:07.207672    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:07.212523    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:07.212530    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:07.247536    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:07.247547    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:07.258718    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:07.258731    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:07.270337    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:07.270347    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:07.286036    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:07.286052    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:07.298317    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:07.298328    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
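
Each gathering pass then fans out over a fixed command set: docker logs --tail 400 <id> per discovered container, journalctl for the kubelet and docker/cri-docker units, the filtered dmesg, and a describe-nodes run through the version-pinned kubectl that minikube installs under /var/lib/minikube/binaries/v1.24.1/ (against the VM-local kubeconfig rather than the host's). A sketch of assembling that set; the single container ID is taken from the log, the rest is assumption:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherCommands assembles the per-source shell commands seen in the
    // "Gathering logs for ..." records. Illustrative only.
    func gatherCommands(containers map[string]string) map[string]string {
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
                " --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        for name, id := range containers {
            cmds[name] = "docker logs --tail 400 " + id
        }
        return cmds
    }

    func main() {
        cmds := gatherCommands(map[string]string{"etcd": "920d8038872e"})
        // map iteration order is unspecified, which is consistent with the
        // gather order varying from cycle to cycle in the log
        for src, cmd := range cmds {
            fmt.Printf("Gathering logs for %s ...\n", src)
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Print(string(out))
        }
    }
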
	I1211 15:41:09.822675    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:10.200669    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:14.824829    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:14.824954    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:14.836278    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:14.836367    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:14.847028    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:14.847104    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:14.857843    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:14.857920    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:14.868311    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:14.868381    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:14.878969    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:14.879038    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:14.889303    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:14.889395    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:14.910486    9116 logs.go:282] 0 containers: []
	W1211 15:41:14.910497    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:14.910559    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:14.921314    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:14.921333    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:14.921342    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:14.949359    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:14.949384    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:14.973656    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:14.973668    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:14.985149    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:14.985163    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:15.002400    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:15.002411    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:15.014112    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:15.014138    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:15.025552    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:15.025564    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:15.037927    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:15.037938    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
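
The dmesg probe keeps the kernel log manageable: --level warn,err,crit,alert,emerg drops informational records, -H requests human-readable output, -L=never disables colour codes, -P appears to be --nopager (so the human-readable output is not piped through a pager), and | tail -n 400 caps the result, mirroring the --tail 400 / -n 400 limits used for the container and journal sources.
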
	I1211 15:41:15.042140    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:15.042148    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:15.079310    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:15.079321    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:15.093255    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:15.093266    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:15.105071    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:15.105084    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:15.140372    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:15.140379    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:15.151872    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:15.151883    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:15.170357    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:15.170369    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:15.203169    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:15.203275    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:15.214186    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:15.214267    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:15.224734    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:15.224813    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:15.235584    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:15.235665    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:15.246124    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:15.246198    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:15.256720    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:15.256802    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:15.267154    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:15.267229    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:15.278018    9127 logs.go:282] 0 containers: []
	W1211 15:41:15.278029    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:15.278096    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:15.288980    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:15.288997    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:15.289003    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:15.294257    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:15.294264    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:15.305642    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:15.305653    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:15.317892    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:15.317903    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:15.351327    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:15.351336    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:15.366663    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:15.366673    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:15.378185    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:15.378196    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:15.399765    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:15.399775    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:15.411498    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:15.411508    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:15.430339    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:15.430349    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:15.442033    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:15.442044    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:15.465496    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:15.465504    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:15.499351    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:15.499362    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:15.513754    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:15.513765    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:15.525412    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:15.525424    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:18.043154    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:17.684041    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:23.044963    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:23.045063    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:23.056652    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:23.056784    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:23.069707    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:23.069780    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:23.080437    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:23.080516    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:23.091366    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:23.091437    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:23.103199    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:23.103276    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:23.113577    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:23.113650    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:23.124155    9127 logs.go:282] 0 containers: []
	W1211 15:41:23.124170    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:23.124226    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:23.134825    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:23.134843    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:23.134848    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:23.168694    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:23.168703    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:23.204304    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:23.204314    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:23.215915    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:23.215928    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:23.227428    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:23.227443    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:23.252552    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:23.252567    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:23.267059    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:23.267076    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:23.278218    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:23.278233    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:23.302192    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:23.302201    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:23.314526    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:23.314536    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:23.326499    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:23.326512    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:23.337950    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:23.337962    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:23.342427    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:23.342432    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:23.358220    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:23.358233    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:23.370948    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:23.370958    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:22.686163    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:22.686520    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:22.712189    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:22.712329    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:22.730297    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:22.730391    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:22.743743    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:22.743840    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:22.754879    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:22.754956    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:22.765149    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:22.765229    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:22.776766    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:22.776842    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:22.788244    9116 logs.go:282] 0 containers: []
	W1211 15:41:22.788256    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:22.788326    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:22.799515    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:22.799534    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:22.799539    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:22.813995    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:22.814005    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:22.825966    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:22.825979    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:22.837667    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:22.837678    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:22.849593    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:22.849607    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:22.864235    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:22.864246    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:22.882178    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:22.882189    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:22.886366    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:22.886375    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:22.909904    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:22.909916    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:22.921279    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:22.921293    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:22.955607    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:22.955616    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:23.021165    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:23.021176    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:23.035210    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:23.035223    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:23.046633    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:23.046644    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:23.059455    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:23.059464    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:25.887965    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:25.573518    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:30.890061    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:30.890181    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:30.904980    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:30.905063    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:30.916627    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:30.916711    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:30.929149    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:30.929232    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:30.940185    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:30.940260    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:30.951309    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:30.951388    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:30.961965    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:30.962040    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:30.972314    9127 logs.go:282] 0 containers: []
	W1211 15:41:30.972325    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:30.972393    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:30.986095    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:30.986113    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:30.986118    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:30.997776    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:30.997790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:31.009850    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:31.009862    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:31.022387    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:31.022398    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:31.057089    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:31.057098    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:31.090963    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:31.090972    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:31.105422    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:31.105432    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:31.118470    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:31.118482    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:31.130682    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:31.130691    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:31.135172    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:31.135179    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:31.149386    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:31.149396    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:31.164872    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:31.164883    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:31.190560    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:31.190571    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:31.205320    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:31.205332    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:31.222447    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:31.222458    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:33.735205    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:30.575470    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:30.575726    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:30.594711    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:30.594822    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:30.608891    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:30.608981    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:30.620992    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:30.621081    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:30.634933    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:30.635025    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:30.645659    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:30.645744    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:30.656506    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:30.656583    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:30.666657    9116 logs.go:282] 0 containers: []
	W1211 15:41:30.666678    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:30.666745    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:30.676829    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:30.676847    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:30.676854    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:30.688318    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:30.688329    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:30.700603    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:30.700616    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:30.712115    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:30.712126    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:30.729259    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:30.729270    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:30.740940    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:30.740951    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:30.764772    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:30.764779    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:30.779179    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:30.779190    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:30.816311    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:30.816322    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:30.831278    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:30.831289    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:30.867207    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:30.867215    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:30.879051    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:30.879062    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:30.883883    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:30.883892    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:30.899958    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:30.899971    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:30.917552    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:30.917562    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:33.434363    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:38.737252    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:38.737355    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:38.748959    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:38.749039    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:38.761034    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:38.761142    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:38.772497    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:38.772584    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:38.784177    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:38.784254    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:38.795103    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:38.795185    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:38.805880    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:38.805960    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:38.815974    9127 logs.go:282] 0 containers: []
	W1211 15:41:38.815987    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:38.816060    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:38.826574    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:38.826591    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:38.826596    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:38.838538    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:38.838548    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:38.853979    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:38.853993    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:38.434646    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:38.434840    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:38.449383    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:38.449474    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:38.460701    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:38.460781    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:38.471540    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:38.471615    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:38.482917    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:38.482996    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:38.493289    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:38.493369    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:38.503717    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:38.503789    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:38.513689    9116 logs.go:282] 0 containers: []
	W1211 15:41:38.513699    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:38.513765    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:38.524134    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:38.524156    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:38.524163    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:38.559047    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:38.559058    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:38.573162    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:38.573174    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:38.586934    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:38.586945    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:38.598716    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:38.598731    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:38.610253    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:38.610263    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:38.621799    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:38.621810    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:38.657402    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:38.657412    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:38.662042    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:38.662049    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:38.673861    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:38.673872    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:38.687380    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:38.687395    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:38.705616    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:38.705626    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:38.720735    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:38.720747    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:38.746811    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:38.746830    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:38.759545    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:38.759558    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:38.865379    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:38.865389    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:38.877588    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:38.877598    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:38.889631    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:38.889646    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:38.923014    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:38.923029    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:38.937760    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:38.937778    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:38.972779    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:38.972790    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:38.990949    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:38.990964    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:39.003099    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:39.003112    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:39.014970    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:39.014980    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:39.033536    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:39.033549    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:39.059038    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:39.059053    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:39.071121    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:39.071131    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:41.578169    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:41.280823    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:46.579251    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:46.579362    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:46.591049    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:46.591138    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:46.602131    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:46.602209    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:46.613890    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:46.613977    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:46.625343    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:46.625421    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:46.636791    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:46.636869    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:46.647744    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:46.647825    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:46.660052    9127 logs.go:282] 0 containers: []
	W1211 15:41:46.660063    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:46.660132    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:46.670925    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:46.670941    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:46.670947    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:46.706778    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:46.706789    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:46.722270    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:46.722281    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:46.756316    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:46.756333    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:46.762193    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:46.762201    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:46.779091    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:46.779102    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:46.790406    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:46.790415    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:46.804706    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:46.804715    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:46.820199    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:46.820213    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:46.835452    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:46.835465    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:46.852565    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:46.852576    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:46.864427    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:46.864437    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:46.884233    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:46.884243    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:46.908092    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:46.908102    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:46.920812    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:46.920823    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:46.281526    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:46.281834    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:46.306163    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:46.306300    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:46.325107    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:46.325201    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:46.339154    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:46.339257    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:46.350508    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:46.350587    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:46.360993    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:46.361068    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:46.371105    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:46.371183    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:46.381327    9116 logs.go:282] 0 containers: []
	W1211 15:41:46.381338    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:46.381400    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:46.392204    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:46.392218    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:46.392224    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:46.406217    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:46.406231    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:46.442526    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:46.442537    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:46.447428    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:46.447433    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:46.461487    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:46.461497    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:46.472723    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:46.472751    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:46.484822    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:46.484833    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:46.499536    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:46.499551    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:46.534389    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:46.534404    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:46.552595    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:46.552606    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:46.564312    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:46.564322    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:46.576410    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:46.576421    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:46.589459    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:46.589471    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:46.602200    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:46.602210    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:46.618857    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:46.618868    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:49.147200    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:49.448156    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:54.149255    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:54.149441    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:54.164721    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:54.164803    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:54.175344    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:54.175421    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:54.185815    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:54.185901    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:54.196734    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:54.196814    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:54.206884    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:54.206970    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:54.218052    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:54.218133    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:54.232781    9116 logs.go:282] 0 containers: []
	W1211 15:41:54.232796    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:54.232866    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:54.242992    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:54.243011    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:54.243017    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:54.247563    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:54.247572    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:54.282927    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:54.282937    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:54.295272    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:54.295284    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:54.310835    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:54.310846    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:54.323070    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:54.323080    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:54.334569    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:54.334580    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:54.349448    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:54.349462    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:54.361026    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:54.361040    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:54.372855    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:54.372865    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:54.384174    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:54.384185    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:54.418743    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:54.418760    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:54.433432    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:54.433447    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:54.445257    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:54.445269    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:54.464085    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:54.464093    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:54.448418    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:54.448513    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:54.462067    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:41:54.462148    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:54.473880    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:41:54.473959    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:54.484890    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:41:54.484973    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:54.495713    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:41:54.495786    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:54.506241    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:41:54.506310    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:54.520568    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:41:54.520639    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:54.532358    9127 logs.go:282] 0 containers: []
	W1211 15:41:54.532374    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:54.532442    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:54.543191    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:41:54.543213    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:41:54.543218    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:41:54.560959    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:54.560973    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:54.583879    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:41:54.583885    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:41:54.598325    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:41:54.598335    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:41:54.613813    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:41:54.613826    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:41:54.625518    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:41:54.625530    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:41:54.637767    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:54.637776    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:54.670993    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:54.671010    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:54.706794    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:41:54.706807    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:41:54.719005    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:41:54.719015    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:41:54.730895    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:41:54.730906    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:41:54.742796    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:41:54.742807    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:54.762365    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:54.762376    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:54.767316    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:41:54.767324    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:41:54.789022    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:41:54.789033    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:41:57.309394    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:56.991127    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:02.310117    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:02.310216    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:02.322099    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:02.322201    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:02.333899    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:02.333980    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:02.345893    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:02.345971    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:02.356050    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:02.356132    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:02.366456    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:02.366535    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:02.377673    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:02.377756    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:02.387944    9127 logs.go:282] 0 containers: []
	W1211 15:42:02.387958    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:02.388027    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:02.398294    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:02.398311    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:02.398316    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:02.412342    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:02.412354    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:02.436255    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:02.436265    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:02.440678    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:02.440686    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:02.455879    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:02.455890    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:02.471186    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:02.471199    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:02.482760    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:02.482772    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:02.494940    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:02.494953    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:02.507649    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:02.507664    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:02.519944    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:02.519955    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:02.533576    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:02.533589    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:02.552076    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:02.552090    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:02.586069    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:02.586080    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:02.624283    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:02.624294    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:02.636039    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:02.636056    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:01.993269    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:01.993395    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:02.004534    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:42:02.004619    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:02.015128    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:42:02.015208    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:02.025674    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:42:02.025748    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:02.037292    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:42:02.037370    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:02.049712    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:42:02.049789    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:02.059868    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:42:02.059940    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:02.070273    9116 logs.go:282] 0 containers: []
	W1211 15:42:02.070283    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:02.070344    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:02.080747    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:42:02.080767    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:02.080774    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:02.085333    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:42:02.085340    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:42:02.099677    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:42:02.099689    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:42:02.117074    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:42:02.117084    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:42:02.128857    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:42:02.128867    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:42:02.144111    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:42:02.144125    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:42:02.171008    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:02.171019    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:02.206847    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:42:02.206856    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:42:02.218599    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:42:02.218613    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:42:02.231094    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:42:02.231104    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:42:02.242511    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:02.242524    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:02.266871    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:02.266879    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:02.303877    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:42:02.303889    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:42:02.316187    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:42:02.316199    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:42:02.328523    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:42:02.328535    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:04.844868    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:05.149440    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:09.845714    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:09.845841    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:09.856862    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:42:09.856952    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:09.867935    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:42:09.868024    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:09.878686    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:42:09.878769    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:09.889175    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:42:09.889249    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:09.899737    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:42:09.899817    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:09.910249    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:42:09.910322    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:09.923679    9116 logs.go:282] 0 containers: []
	W1211 15:42:09.923691    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:09.923754    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:09.934004    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:42:09.934022    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:09.934027    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:09.938925    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:09.938935    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:09.972746    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:42:09.972759    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:42:09.988010    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:42:09.988022    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:42:10.006924    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:42:10.006936    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:10.018514    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:10.018526    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:10.055036    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:42:10.055046    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:42:10.069617    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:10.069629    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:10.093245    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:42:10.093252    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:42:10.104319    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:42:10.104333    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:42:10.116323    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:42:10.116335    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:42:10.128245    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:42:10.128256    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:42:10.146456    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:42:10.146467    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:42:10.158589    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:42:10.158602    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:42:10.174268    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:42:10.174281    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:42:10.149898    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:10.149996    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:10.161379    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:10.161463    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:10.175221    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:10.175302    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:10.186431    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:10.186516    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:10.197143    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:10.197226    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:10.212067    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:10.212151    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:10.222449    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:10.222525    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:10.233177    9127 logs.go:282] 0 containers: []
	W1211 15:42:10.233188    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:10.233267    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:10.243773    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:10.243792    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:10.243797    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:10.268502    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:10.268510    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:10.287156    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:10.287166    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:10.298852    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:10.298863    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:10.333555    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:10.333564    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:10.345867    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:10.345879    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:10.361909    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:10.361918    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:10.379772    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:10.379786    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:10.391561    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:10.391576    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:10.426898    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:10.426913    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:10.439253    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:10.439266    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:10.451066    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:10.451077    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:10.466500    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:10.466511    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:10.478187    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:10.478200    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:10.483418    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:10.483428    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:13.000207    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:12.688690    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:18.002342    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:18.002467    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:18.026146    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:18.026230    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:18.042685    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:18.042769    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:18.054404    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:18.054487    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:18.065035    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:18.065117    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:18.075385    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:18.075459    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:18.087131    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:18.087211    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:18.101015    9127 logs.go:282] 0 containers: []
	W1211 15:42:18.101026    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:18.101091    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:18.111945    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:18.111962    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:18.111967    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:18.126976    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:18.126986    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:18.138588    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:18.138599    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:18.150591    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:18.150605    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:18.162057    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:18.162067    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:18.173466    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:18.173480    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:18.177895    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:18.177901    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:18.191565    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:18.191575    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:18.206465    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:18.206477    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:18.222737    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:18.222747    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:18.239948    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:18.239957    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:18.275108    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:18.275118    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:18.287104    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:18.287117    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:18.312516    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:18.312524    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:18.324370    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:18.324386    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:17.690866    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:17.690988    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:17.706169    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:42:17.706257    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:17.719667    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:42:17.719747    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:17.730523    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:42:17.730595    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:17.741112    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:42:17.741187    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:17.751757    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:42:17.751838    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:17.763530    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:42:17.763603    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:17.773167    9116 logs.go:282] 0 containers: []
	W1211 15:42:17.773184    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:17.773245    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:17.783961    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:42:17.783984    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:42:17.783990    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:42:17.795768    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:42:17.795778    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:42:17.810716    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:42:17.810727    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:42:17.828067    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:17.828080    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:17.863680    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:17.863701    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:17.897611    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:42:17.897625    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:42:17.913088    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:42:17.913098    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:42:17.924464    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:42:17.924477    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:42:17.935772    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:42:17.935786    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:42:17.951735    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:42:17.951747    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:42:17.963445    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:17.963457    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:17.986654    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:42:17.986663    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:17.998326    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:17.998338    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:18.003500    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:42:18.003509    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:42:18.016538    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:42:18.016553    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:42:20.860891    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:20.534616    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:25.536715    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:25.541153    9116 out.go:201] 
	W1211 15:42:25.544897    9116 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1211 15:42:25.544903    9116 out.go:270] * 
	W1211 15:42:25.545668    9116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:42:25.557010    9116 out.go:201] 
	I1211 15:42:25.861242    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:25.861318    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:25.873050    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:25.873130    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:25.884022    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:25.884101    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:25.895370    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:25.895457    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:25.907535    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:25.907583    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:25.919982    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:25.920032    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:25.931536    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:25.931604    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:25.943101    9127 logs.go:282] 0 containers: []
	W1211 15:42:25.943112    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:25.943163    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:25.955378    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:25.955395    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:25.955400    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:25.994976    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:25.994990    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:26.010402    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:26.010436    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:26.027013    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:26.027023    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:26.042064    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:26.042076    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:26.077759    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:26.077778    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:26.090065    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:26.090074    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:26.102391    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:26.102408    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:26.113994    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:26.114004    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:26.127625    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:26.127639    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:26.152887    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:26.152902    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:26.157560    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:26.157572    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:26.171218    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:26.171228    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:26.183539    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:26.183553    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:26.200248    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:26.200259    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:28.720424    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:33.721166    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:33.721353    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:33.739646    9127 logs.go:282] 1 containers: [a3fa0d766793]
	I1211 15:42:33.739752    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:33.753243    9127 logs.go:282] 1 containers: [4b2190ed09b4]
	I1211 15:42:33.753332    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:33.764746    9127 logs.go:282] 4 containers: [c8d8a1d9479a db28b2c64217 cccbdb12b2cf ca88055a8d39]
	I1211 15:42:33.764834    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:33.775177    9127 logs.go:282] 1 containers: [497502e201e2]
	I1211 15:42:33.775252    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:33.785629    9127 logs.go:282] 1 containers: [99becbb9ed95]
	I1211 15:42:33.785713    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:33.796033    9127 logs.go:282] 1 containers: [41ffdfa24618]
	I1211 15:42:33.796109    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:33.806183    9127 logs.go:282] 0 containers: []
	W1211 15:42:33.806196    9127 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:33.806258    9127 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:33.816765    9127 logs.go:282] 1 containers: [7cc45d9c1547]
	I1211 15:42:33.816783    9127 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:33.816789    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:33.851634    9127 logs.go:123] Gathering logs for storage-provisioner [7cc45d9c1547] ...
	I1211 15:42:33.851641    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cc45d9c1547"
	I1211 15:42:33.866352    9127 logs.go:123] Gathering logs for coredns [c8d8a1d9479a] ...
	I1211 15:42:33.866367    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8d8a1d9479a"
	I1211 15:42:33.878030    9127 logs.go:123] Gathering logs for coredns [cccbdb12b2cf] ...
	I1211 15:42:33.878043    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cccbdb12b2cf"
	I1211 15:42:33.889746    9127 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:33.889755    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:33.913948    9127 logs.go:123] Gathering logs for kube-proxy [99becbb9ed95] ...
	I1211 15:42:33.913954    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99becbb9ed95"
	I1211 15:42:33.925676    9127 logs.go:123] Gathering logs for kube-controller-manager [41ffdfa24618] ...
	I1211 15:42:33.925685    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41ffdfa24618"
	I1211 15:42:33.942991    9127 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:33.943004    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:33.979997    9127 logs.go:123] Gathering logs for kube-apiserver [a3fa0d766793] ...
	I1211 15:42:33.980011    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fa0d766793"
	I1211 15:42:33.995105    9127 logs.go:123] Gathering logs for etcd [4b2190ed09b4] ...
	I1211 15:42:33.995114    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b2190ed09b4"
	I1211 15:42:34.008818    9127 logs.go:123] Gathering logs for coredns [ca88055a8d39] ...
	I1211 15:42:34.008828    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca88055a8d39"
	I1211 15:42:34.020908    9127 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:34.020918    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:34.025569    9127 logs.go:123] Gathering logs for coredns [db28b2c64217] ...
	I1211 15:42:34.025579    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db28b2c64217"
	I1211 15:42:34.037572    9127 logs.go:123] Gathering logs for kube-scheduler [497502e201e2] ...
	I1211 15:42:34.037582    9127 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 497502e201e2"
	I1211 15:42:34.052444    9127 logs.go:123] Gathering logs for container status ...
	I1211 15:42:34.052454    9127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:36.567580    9127 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:41.569699    9127 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:41.573763    9127 out.go:201] 
	W1211 15:42:41.577575    9127 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1211 15:42:41.577582    9127 out.go:270] * 
	W1211 15:42:41.578115    9127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:42:41.589686    9127 out.go:201] 
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-12-11 23:33:26 UTC, ends at Wed 2024-12-11 23:42:57 UTC. --
	Dec 11 23:42:42 running-upgrade-031000 dockerd[4043]: time="2024-12-11T23:42:42.579990243Z" level=warning msg="cleanup warnings time=\"2024-12-11T23:42:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=19492 runtime=io.containerd.runc.v2\n"
	Dec 11 23:42:42 running-upgrade-031000 dockerd[4043]: time="2024-12-11T23:42:42.630165947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 11 23:42:42 running-upgrade-031000 dockerd[4043]: time="2024-12-11T23:42:42.630201030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 11 23:42:42 running-upgrade-031000 dockerd[4043]: time="2024-12-11T23:42:42.630207363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 11 23:42:42 running-upgrade-031000 dockerd[4043]: time="2024-12-11T23:42:42.630265694Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/83a9c76b0a9f238c47ec886a7a34ef99b448c9e507da551ece558abdfb21bd23 pid=19512 runtime=io.containerd.runc.v2
	Dec 11 23:42:43 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:43Z" level=error msg="ContainerStats resp: {0x4000357f40 linux}"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=error msg="ContainerStats resp: {0x40008a3e80 linux}"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=error msg="ContainerStats resp: {0x40006869c0 linux}"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=error msg="ContainerStats resp: {0x4000686b40 linux}"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=error msg="ContainerStats resp: {0x4000994940 linux}"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=error msg="ContainerStats resp: {0x4000994fc0 linux}"
	Dec 11 23:42:44 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:44Z" level=error msg="ContainerStats resp: {0x4000686040 linux}"
	Dec 11 23:42:49 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:49Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 11 23:42:54 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 11 23:42:54 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:54Z" level=error msg="ContainerStats resp: {0x40008ad580 linux}"
	Dec 11 23:42:54 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:54Z" level=error msg="ContainerStats resp: {0x40008ad980 linux}"
	Dec 11 23:42:55 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:55Z" level=error msg="ContainerStats resp: {0x40000b7ec0 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x4000686580 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x4000686940 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x40008a3a00 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x4000686e80 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x4000357880 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x40006877c0 linux}"
	Dec 11 23:42:56 running-upgrade-031000 cri-dockerd[3768]: time="2024-12-11T23:42:56Z" level=error msg="ContainerStats resp: {0x4000687e00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	83a9c76b0a9f2       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   2f3125b9af839
	4078e96d144ef       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   8863ec19ec6ae
	c8d8a1d9479a8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   2f3125b9af839
	db28b2c64217f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8863ec19ec6ae
	7cc45d9c1547c       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   2c97de0bf9b4b
	99becbb9ed95b       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   90f0a3e9ab2ec
	497502e201e21       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   711adc3a2dc6a
	a3fa0d7667931       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   628fa8d6149a2
	4b2190ed09b41       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   a6f644ce84e1f
	41ffdfa246181       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   86cee25063c97
	
	
	==> coredns [4078e96d144e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3990823418931763107.8457602623983941319. HINFO: read udp 10.244.0.3:59321->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3990823418931763107.8457602623983941319. HINFO: read udp 10.244.0.3:46121->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3990823418931763107.8457602623983941319. HINFO: read udp 10.244.0.3:40149->10.0.2.3:53: i/o timeout
	
	
	==> coredns [83a9c76b0a9f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9140777600322549935.3803358270898105299. HINFO: read udp 10.244.0.2:34361->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9140777600322549935.3803358270898105299. HINFO: read udp 10.244.0.2:48754->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9140777600322549935.3803358270898105299. HINFO: read udp 10.244.0.2:34271->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c8d8a1d9479a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:41842->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:48081->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:56319->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:54401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:51808->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:36874->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:46117->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:54705->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:59650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2816930522842913144.8071008870866284949. HINFO: read udp 10.244.0.2:53977->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db28b2c64217] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:52385->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:41647->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:60111->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:32837->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:56537->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:45707->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:40654->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:35603->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:43626->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3018768977660146268.3220636325686623833. HINFO: read udp 10.244.0.3:54194->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-031000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-031000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=running-upgrade-031000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T15_38_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:38:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-031000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Dec 2024 23:42:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Dec 2024 23:38:41 +0000   Wed, 11 Dec 2024 23:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Dec 2024 23:38:41 +0000   Wed, 11 Dec 2024 23:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Dec 2024 23:38:41 +0000   Wed, 11 Dec 2024 23:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Dec 2024 23:38:41 +0000   Wed, 11 Dec 2024 23:38:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-031000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148872Ki
	  pods:               110
	System Info:
	  Machine ID:                 672f076a2c9346dcbb906f4ceef4577f
	  System UUID:                672f076a2c9346dcbb906f4ceef4577f
	  Boot ID:                    77c79bfb-e17b-4b97-99a2-ba5ff84ae7f9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bfnnw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-wvzq6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-031000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-031000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-031000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-f6qtb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-031000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  NodeReady                4m16s  kubelet          Node running-upgrade-031000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m16s  kubelet          Node running-upgrade-031000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet          Node running-upgrade-031000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet          Node running-upgrade-031000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-031000 event: Registered Node running-upgrade-031000 in Controller
	
	
	==> dmesg <==
	[  +0.180868] systemd-fstab-generator[867]: Ignoring "noauto" for root device
	[  +0.087615] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.082590] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +1.224966] systemd-fstab-generator[1039]: Ignoring "noauto" for root device
	[  +0.079065] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +2.170596] systemd-fstab-generator[1276]: Ignoring "noauto" for root device
	[  +9.633820] systemd-fstab-generator[1922]: Ignoring "noauto" for root device
	[Dec11 23:34] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.269534] systemd-fstab-generator[2416]: Ignoring "noauto" for root device
	[  +0.209243] systemd-fstab-generator[2564]: Ignoring "noauto" for root device
	[  +0.109844] systemd-fstab-generator[2575]: Ignoring "noauto" for root device
	[  +0.108327] systemd-fstab-generator[2588]: Ignoring "noauto" for root device
	[  +5.084812] kauditd_printk_skb: 10 callbacks suppressed
	[ +11.357269] systemd-fstab-generator[3725]: Ignoring "noauto" for root device
	[  +0.095665] systemd-fstab-generator[3736]: Ignoring "noauto" for root device
	[  +0.093632] systemd-fstab-generator[3747]: Ignoring "noauto" for root device
	[  +0.104402] systemd-fstab-generator[3761]: Ignoring "noauto" for root device
	[  +2.418264] systemd-fstab-generator[4029]: Ignoring "noauto" for root device
	[  +3.529125] systemd-fstab-generator[4404]: Ignoring "noauto" for root device
	[  +1.222699] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.051415] systemd-fstab-generator[4812]: Ignoring "noauto" for root device
	[  +5.286289] kauditd_printk_skb: 1 callbacks suppressed
	[Dec11 23:38] systemd-fstab-generator[12531]: Ignoring "noauto" for root device
	[  +5.126391] systemd-fstab-generator[13121]: Ignoring "noauto" for root device
	[  +0.474925] systemd-fstab-generator[13253]: Ignoring "noauto" for root device
	
	
	==> etcd [4b2190ed09b4] <==
	{"level":"info","ts":"2024-12-11T23:38:36.733Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-11T23:38:36.733Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-12-11T23:38:36.736Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-11T23:38:36.736Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-11T23:38:36.736Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-11T23:38:36.736Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-11T23:38:36.736Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-11T23:38:36.931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-11T23:38:36.932Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-11T23:38:36.937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-11T23:38:36.937Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-11T23:38:36.937Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-11T23:38:36.937Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-031000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-11T23:38:36.937Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-11T23:38:36.938Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-11T23:38:36.938Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-11T23:38:36.939Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-11T23:38:36.939Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-11T23:38:36.939Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:42:57 up 9 min,  0 users,  load average: 0.47, 0.34, 0.19
	Linux running-upgrade-031000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a3fa0d766793] <==
	I1211 23:38:38.569412       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1211 23:38:38.585050       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1211 23:38:38.587136       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1211 23:38:38.587178       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1211 23:38:38.587354       1 cache.go:39] Caches are synced for autoregister controller
	I1211 23:38:38.588051       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1211 23:38:38.613155       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1211 23:38:39.316855       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1211 23:38:39.490922       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1211 23:38:39.493006       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1211 23:38:39.493017       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1211 23:38:39.634134       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1211 23:38:39.644280       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1211 23:38:39.746367       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1211 23:38:39.748897       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1211 23:38:39.749331       1 controller.go:611] quota admission added evaluator for: endpoints
	I1211 23:38:39.750774       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 23:38:40.628624       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1211 23:38:40.921032       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1211 23:38:40.925369       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1211 23:38:40.941491       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1211 23:38:40.974803       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1211 23:38:53.787378       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1211 23:38:54.186513       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1211 23:38:55.007268       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [41ffdfa24618] <==
	I1211 23:38:53.489480       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1211 23:38:53.492102       1 range_allocator.go:374] Set node running-upgrade-031000 PodCIDR to [10.244.0.0/24]
	I1211 23:38:53.495270       1 shared_informer.go:262] Caches are synced for cronjob
	I1211 23:38:53.507683       1 shared_informer.go:262] Caches are synced for persistent volume
	I1211 23:38:53.527934       1 shared_informer.go:262] Caches are synced for stateful set
	I1211 23:38:53.534709       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1211 23:38:53.535458       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1211 23:38:53.535502       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1211 23:38:53.535535       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1211 23:38:53.535458       1 shared_informer.go:262] Caches are synced for expand
	I1211 23:38:53.553428       1 shared_informer.go:262] Caches are synced for PVC protection
	I1211 23:38:53.554518       1 shared_informer.go:262] Caches are synced for ephemeral
	I1211 23:38:53.585574       1 shared_informer.go:262] Caches are synced for attach detach
	I1211 23:38:53.592574       1 shared_informer.go:262] Caches are synced for disruption
	I1211 23:38:53.592612       1 disruption.go:371] Sending events to api server.
	I1211 23:38:53.685201       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1211 23:38:53.691307       1 shared_informer.go:262] Caches are synced for resource quota
	I1211 23:38:53.691310       1 shared_informer.go:262] Caches are synced for resource quota
	I1211 23:38:53.790600       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f6qtb"
	I1211 23:38:54.105925       1 shared_informer.go:262] Caches are synced for garbage collector
	I1211 23:38:54.135721       1 shared_informer.go:262] Caches are synced for garbage collector
	I1211 23:38:54.135732       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1211 23:38:54.187714       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1211 23:38:54.489612       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wvzq6"
	I1211 23:38:54.497372       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bfnnw"
	
	
	==> kube-proxy [99becbb9ed95] <==
	I1211 23:38:54.934862       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1211 23:38:54.934899       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1211 23:38:54.934911       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1211 23:38:54.979312       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1211 23:38:54.979320       1 server_others.go:206] "Using iptables Proxier"
	I1211 23:38:54.979333       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1211 23:38:54.979424       1 server.go:661] "Version info" version="v1.24.1"
	I1211 23:38:54.979427       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:38:54.980632       1 config.go:317] "Starting service config controller"
	I1211 23:38:54.980645       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1211 23:38:54.980658       1 config.go:226] "Starting endpoint slice config controller"
	I1211 23:38:54.980664       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1211 23:38:55.005944       1 config.go:444] "Starting node config controller"
	I1211 23:38:55.006004       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1211 23:38:55.080780       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1211 23:38:55.080805       1 shared_informer.go:262] Caches are synced for service config
	I1211 23:38:55.106343       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [497502e201e2] <==
	W1211 23:38:38.536052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1211 23:38:38.536238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1211 23:38:38.536473       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1211 23:38:38.536496       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1211 23:38:38.536540       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1211 23:38:38.536566       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1211 23:38:38.536676       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:38:38.536713       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1211 23:38:38.536747       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1211 23:38:38.536780       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1211 23:38:39.362494       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:38:39.362523       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1211 23:38:39.421816       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1211 23:38:39.421859       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1211 23:38:39.493181       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1211 23:38:39.493240       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1211 23:38:39.530366       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1211 23:38:39.530458       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1211 23:38:39.554568       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1211 23:38:39.554677       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1211 23:38:39.558236       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1211 23:38:39.558704       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1211 23:38:39.578521       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1211 23:38:39.578539       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1211 23:38:41.329244       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-12-11 23:33:26 UTC, ends at Wed 2024-12-11 23:42:58 UTC. --
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.561285   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b0e10175-d429-41db-87d8-3b0522372feb-tmp\") pod \"storage-provisioner\" (UID: \"b0e10175-d429-41db-87d8-3b0522372feb\") " pod="kube-system/storage-provisioner"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.561300   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gvch\" (UniqueName: \"kubernetes.io/projected/b0e10175-d429-41db-87d8-3b0522372feb-kube-api-access-7gvch\") pod \"storage-provisioner\" (UID: \"b0e10175-d429-41db-87d8-3b0522372feb\") " pod="kube-system/storage-provisioner"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.561517   13128 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: E1211 23:38:53.667097   13128 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: E1211 23:38:53.667119   13128 projected.go:192] Error preparing data for projected volume kube-api-access-7gvch for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: E1211 23:38:53.667151   13128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b0e10175-d429-41db-87d8-3b0522372feb-kube-api-access-7gvch podName:b0e10175-d429-41db-87d8-3b0522372feb nodeName:}" failed. No retries permitted until 2024-12-11 23:38:54.167140319 +0000 UTC m=+13.257934998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7gvch" (UniqueName: "kubernetes.io/projected/b0e10175-d429-41db-87d8-3b0522372feb-kube-api-access-7gvch") pod "storage-provisioner" (UID: "b0e10175-d429-41db-87d8-3b0522372feb") : configmap "kube-root-ca.crt" not found
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.793588   13128 topology_manager.go:200] "Topology Admit Handler"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.969844   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b711d34c-92e8-4321-af32-977766293429-xtables-lock\") pod \"kube-proxy-f6qtb\" (UID: \"b711d34c-92e8-4321-af32-977766293429\") " pod="kube-system/kube-proxy-f6qtb"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.969874   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b711d34c-92e8-4321-af32-977766293429-kube-proxy\") pod \"kube-proxy-f6qtb\" (UID: \"b711d34c-92e8-4321-af32-977766293429\") " pod="kube-system/kube-proxy-f6qtb"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.969886   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbvxg\" (UniqueName: \"kubernetes.io/projected/b711d34c-92e8-4321-af32-977766293429-kube-api-access-sbvxg\") pod \"kube-proxy-f6qtb\" (UID: \"b711d34c-92e8-4321-af32-977766293429\") " pod="kube-system/kube-proxy-f6qtb"
	Dec 11 23:38:53 running-upgrade-031000 kubelet[13128]: I1211 23:38:53.969896   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b711d34c-92e8-4321-af32-977766293429-lib-modules\") pod \"kube-proxy-f6qtb\" (UID: \"b711d34c-92e8-4321-af32-977766293429\") " pod="kube-system/kube-proxy-f6qtb"
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: E1211 23:38:54.074899   13128 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: E1211 23:38:54.074916   13128 projected.go:192] Error preparing data for projected volume kube-api-access-sbvxg for pod kube-system/kube-proxy-f6qtb: configmap "kube-root-ca.crt" not found
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: E1211 23:38:54.074944   13128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b711d34c-92e8-4321-af32-977766293429-kube-api-access-sbvxg podName:b711d34c-92e8-4321-af32-977766293429 nodeName:}" failed. No retries permitted until 2024-12-11 23:38:54.574933545 +0000 UTC m=+13.665728182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sbvxg" (UniqueName: "kubernetes.io/projected/b711d34c-92e8-4321-af32-977766293429-kube-api-access-sbvxg") pod "kube-proxy-f6qtb" (UID: "b711d34c-92e8-4321-af32-977766293429") : configmap "kube-root-ca.crt" not found
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: E1211 23:38:54.171247   13128 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: E1211 23:38:54.171266   13128 projected.go:192] Error preparing data for projected volume kube-api-access-7gvch for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: E1211 23:38:54.171294   13128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/b0e10175-d429-41db-87d8-3b0522372feb-kube-api-access-7gvch podName:b0e10175-d429-41db-87d8-3b0522372feb nodeName:}" failed. No retries permitted until 2024-12-11 23:38:55.171285978 +0000 UTC m=+14.262080657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7gvch" (UniqueName: "kubernetes.io/projected/b0e10175-d429-41db-87d8-3b0522372feb-kube-api-access-7gvch") pod "storage-provisioner" (UID: "b0e10175-d429-41db-87d8-3b0522372feb") : configmap "kube-root-ca.crt" not found
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: I1211 23:38:54.491836   13128 topology_manager.go:200] "Topology Admit Handler"
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: I1211 23:38:54.500509   13128 topology_manager.go:200] "Topology Admit Handler"
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: I1211 23:38:54.678382   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v2dh\" (UniqueName: \"kubernetes.io/projected/e9bd86f1-ddf6-419f-8e97-4ee637e56b65-kube-api-access-8v2dh\") pod \"coredns-6d4b75cb6d-bfnnw\" (UID: \"e9bd86f1-ddf6-419f-8e97-4ee637e56b65\") " pod="kube-system/coredns-6d4b75cb6d-bfnnw"
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: I1211 23:38:54.678405   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b5530dd-fa62-48d9-85f6-5596431b8f06-config-volume\") pod \"coredns-6d4b75cb6d-wvzq6\" (UID: \"3b5530dd-fa62-48d9-85f6-5596431b8f06\") " pod="kube-system/coredns-6d4b75cb6d-wvzq6"
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: I1211 23:38:54.678416   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh5vx\" (UniqueName: \"kubernetes.io/projected/3b5530dd-fa62-48d9-85f6-5596431b8f06-kube-api-access-fh5vx\") pod \"coredns-6d4b75cb6d-wvzq6\" (UID: \"3b5530dd-fa62-48d9-85f6-5596431b8f06\") " pod="kube-system/coredns-6d4b75cb6d-wvzq6"
	Dec 11 23:38:54 running-upgrade-031000 kubelet[13128]: I1211 23:38:54.678427   13128 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9bd86f1-ddf6-419f-8e97-4ee637e56b65-config-volume\") pod \"coredns-6d4b75cb6d-bfnnw\" (UID: \"e9bd86f1-ddf6-419f-8e97-4ee637e56b65\") " pod="kube-system/coredns-6d4b75cb6d-bfnnw"
	Dec 11 23:42:43 running-upgrade-031000 kubelet[13128]: I1211 23:42:43.102009   13128 scope.go:110] "RemoveContainer" containerID="ca88055a8d3943090ec62876704770b7b3d6efa1fb3eea0f9e4c230a22a6f239"
	Dec 11 23:42:43 running-upgrade-031000 kubelet[13128]: I1211 23:42:43.128687   13128 scope.go:110] "RemoveContainer" containerID="cccbdb12b2cf2f809e09cf58e6b6f78c4711a6f39aca1b0677ea52fc1be90b7a"
	
	
	==> storage-provisioner [7cc45d9c1547] <==
	I1211 23:38:55.460834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1211 23:38:55.466608       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1211 23:38:55.466639       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1211 23:38:55.471211       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1211 23:38:55.471330       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f62ba60-2eec-4aeb-a1b6-7a1b43f71341", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-031000_90094f9a-7177-4bc4-b95c-a7a9b5e8aecb became leader
	I1211 23:38:55.471356       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-031000_90094f9a-7177-4bc4-b95c-a7a9b5e8aecb!
	I1211 23:38:55.572046       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-031000_90094f9a-7177-4bc4-b95c-a7a9b5e8aecb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-031000 -n running-upgrade-031000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-031000 -n running-upgrade-031000: exit status 2 (15.673130833s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-031000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-031000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-031000
--- FAIL: TestRunningBinaryUpgrade (622.02s)

TestKubernetesUpgrade (19.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.116277792s)

-- stdout --
	* [kubernetes-upgrade-476000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-476000" primary control-plane node in "kubernetes-upgrade-476000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-476000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:32:33.068482    9012 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:32:33.068763    9012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:32:33.068766    9012 out.go:358] Setting ErrFile to fd 2...
	I1211 15:32:33.068769    9012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:32:33.068883    9012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:32:33.070284    9012 out.go:352] Setting JSON to false
	I1211 15:32:33.089508    9012 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5523,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:32:33.089583    9012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:32:33.094481    9012 out.go:177] * [kubernetes-upgrade-476000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:32:33.108501    9012 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:32:33.108542    9012 notify.go:220] Checking for updates...
	I1211 15:32:33.116388    9012 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:32:33.121421    9012 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:32:33.124440    9012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:32:33.127496    9012 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:32:33.130454    9012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:32:33.133871    9012 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:32:33.133977    9012 config.go:182] Loaded profile config "offline-docker-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:32:33.134025    9012 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:32:33.138428    9012 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:32:33.145508    9012 start.go:297] selected driver: qemu2
	I1211 15:32:33.145513    9012 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:32:33.145528    9012 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:32:33.148291    9012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:32:33.151412    9012 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:32:33.154506    9012 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 15:32:33.154530    9012 cni.go:84] Creating CNI manager for ""
	I1211 15:32:33.154556    9012 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1211 15:32:33.154602    9012 start.go:340] cluster config:
	{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:32:33.159652    9012 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:32:33.167416    9012 out.go:177] * Starting "kubernetes-upgrade-476000" primary control-plane node in "kubernetes-upgrade-476000" cluster
	I1211 15:32:33.171468    9012 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:32:33.171489    9012 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:32:33.171497    9012 cache.go:56] Caching tarball of preloaded images
	I1211 15:32:33.171583    9012 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:32:33.171589    9012 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1211 15:32:33.171662    9012 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kubernetes-upgrade-476000/config.json ...
	I1211 15:32:33.171674    9012 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kubernetes-upgrade-476000/config.json: {Name:mke7778b59f0d651570c774e73de4afc019dda44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:32:33.172220    9012 start.go:360] acquireMachinesLock for kubernetes-upgrade-476000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:33.172275    9012 start.go:364] duration metric: took 46.334µs to acquireMachinesLock for "kubernetes-upgrade-476000"
	I1211 15:32:33.172287    9012 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:32:33.172317    9012 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:32:33.181391    9012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:32:33.198947    9012 start.go:159] libmachine.API.Create for "kubernetes-upgrade-476000" (driver="qemu2")
	I1211 15:32:33.198990    9012 client.go:168] LocalClient.Create starting
	I1211 15:32:33.199072    9012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:32:33.199114    9012 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:33.199128    9012 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:33.199167    9012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:32:33.199198    9012 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:33.199207    9012 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:33.199602    9012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:32:33.362050    9012 main.go:141] libmachine: Creating SSH key...
	I1211 15:32:33.620110    9012 main.go:141] libmachine: Creating Disk image...
	I1211 15:32:33.620120    9012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:32:33.620529    9012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:33.631264    9012 main.go:141] libmachine: STDOUT: 
	I1211 15:32:33.631290    9012 main.go:141] libmachine: STDERR: 
	I1211 15:32:33.631345    9012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2 +20000M
	I1211 15:32:33.639951    9012 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:32:33.639967    9012 main.go:141] libmachine: STDERR: 
	I1211 15:32:33.639977    9012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:33.639986    9012 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:32:33.640002    9012 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:33.640038    9012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e9:59:da:aa:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:33.641928    9012 main.go:141] libmachine: STDOUT: 
	I1211 15:32:33.641943    9012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:33.641962    9012 client.go:171] duration metric: took 442.977ms to LocalClient.Create
	I1211 15:32:35.644067    9012 start.go:128] duration metric: took 2.471808041s to createHost
	I1211 15:32:35.644139    9012 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 2.471929208s
	W1211 15:32:35.644191    9012 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:35.662178    9012 out.go:177] * Deleting "kubernetes-upgrade-476000" in qemu2 ...
	W1211 15:32:35.695690    9012 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:35.695717    9012 start.go:729] Will try again in 5 seconds ...
	I1211 15:32:40.697735    9012 start.go:360] acquireMachinesLock for kubernetes-upgrade-476000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:40.697896    9012 start.go:364] duration metric: took 132.209µs to acquireMachinesLock for "kubernetes-upgrade-476000"
	I1211 15:32:40.697944    9012 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:32:40.698012    9012 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:32:40.707624    9012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:32:40.731162    9012 start.go:159] libmachine.API.Create for "kubernetes-upgrade-476000" (driver="qemu2")
	I1211 15:32:40.731201    9012 client.go:168] LocalClient.Create starting
	I1211 15:32:40.731290    9012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:32:40.731344    9012 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:40.731356    9012 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:40.731407    9012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:32:40.731445    9012 main.go:141] libmachine: Decoding PEM data...
	I1211 15:32:40.731458    9012 main.go:141] libmachine: Parsing certificate...
	I1211 15:32:40.731849    9012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:32:40.894691    9012 main.go:141] libmachine: Creating SSH key...
	I1211 15:32:41.087870    9012 main.go:141] libmachine: Creating Disk image...
	I1211 15:32:41.087880    9012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:32:41.088121    9012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:41.100131    9012 main.go:141] libmachine: STDOUT: 
	I1211 15:32:41.100153    9012 main.go:141] libmachine: STDERR: 
	I1211 15:32:41.100227    9012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2 +20000M
	I1211 15:32:41.109100    9012 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:32:41.109114    9012 main.go:141] libmachine: STDERR: 
	I1211 15:32:41.109257    9012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:41.109264    9012 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:32:41.109271    9012 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:41.109316    9012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:84:f1:4a:c0:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:41.111270    9012 main.go:141] libmachine: STDOUT: 
	I1211 15:32:41.111285    9012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:41.111296    9012 client.go:171] duration metric: took 380.101333ms to LocalClient.Create
	I1211 15:32:43.113395    9012 start.go:128] duration metric: took 2.415423958s to createHost
	I1211 15:32:43.113465    9012 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 2.415629s
	W1211 15:32:43.113877    9012 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:43.122613    9012 out.go:201] 
	W1211 15:32:43.126646    9012 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:32:43.126674    9012 out.go:270] * 
	* 
	W1211 15:32:43.129397    9012 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:32:43.137551    9012 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-476000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-476000: (3.537206792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-476000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-476000 status --format={{.Host}}: exit status 7 (68.63325ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.223455791s)

-- stdout --
	* [kubernetes-upgrade-476000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-476000" primary control-plane node in "kubernetes-upgrade-476000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:32:46.792683    9063 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:32:46.792847    9063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:32:46.792850    9063 out.go:358] Setting ErrFile to fd 2...
	I1211 15:32:46.792853    9063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:32:46.792978    9063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:32:46.794044    9063 out.go:352] Setting JSON to false
	I1211 15:32:46.811993    9063 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5536,"bootTime":1733954430,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:32:46.812065    9063 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:32:46.817666    9063 out.go:177] * [kubernetes-upgrade-476000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:32:46.826674    9063 notify.go:220] Checking for updates...
	I1211 15:32:46.831574    9063 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:32:46.839388    9063 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:32:46.847566    9063 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:32:46.853617    9063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:32:46.860607    9063 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:32:46.867604    9063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:32:46.871899    9063 config.go:182] Loaded profile config "kubernetes-upgrade-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1211 15:32:46.872178    9063 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:32:46.876615    9063 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:32:46.883604    9063 start.go:297] selected driver: qemu2
	I1211 15:32:46.883609    9063 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:32:46.883654    9063 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:32:46.886255    9063 cni.go:84] Creating CNI manager for ""
	I1211 15:32:46.886289    9063 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:32:46.886315    9063 start.go:340] cluster config:
	{Name:kubernetes-upgrade-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:32:46.890736    9063 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:32:46.897598    9063 out.go:177] * Starting "kubernetes-upgrade-476000" primary control-plane node in "kubernetes-upgrade-476000" cluster
	I1211 15:32:46.901595    9063 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:32:46.901613    9063 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:32:46.901622    9063 cache.go:56] Caching tarball of preloaded images
	I1211 15:32:46.901695    9063 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:32:46.901700    9063 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:32:46.901757    9063 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kubernetes-upgrade-476000/config.json ...
	I1211 15:32:46.902105    9063 start.go:360] acquireMachinesLock for kubernetes-upgrade-476000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:46.902135    9063 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "kubernetes-upgrade-476000"
	I1211 15:32:46.902143    9063 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:32:46.902147    9063 fix.go:54] fixHost starting: 
	I1211 15:32:46.902262    9063 fix.go:112] recreateIfNeeded on kubernetes-upgrade-476000: state=Stopped err=<nil>
	W1211 15:32:46.902271    9063 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:32:46.909562    9063 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	I1211 15:32:46.913687    9063 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:46.913731    9063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:84:f1:4a:c0:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:46.915867    9063 main.go:141] libmachine: STDOUT: 
	I1211 15:32:46.915887    9063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:46.915917    9063 fix.go:56] duration metric: took 13.766417ms for fixHost
	I1211 15:32:46.915921    9063 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 13.782417ms
	W1211 15:32:46.915926    9063 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:32:46.915965    9063 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:46.915969    9063 start.go:729] Will try again in 5 seconds ...
	I1211 15:32:51.916602    9063 start.go:360] acquireMachinesLock for kubernetes-upgrade-476000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:32:51.917041    9063 start.go:364] duration metric: took 320.708µs to acquireMachinesLock for "kubernetes-upgrade-476000"
	I1211 15:32:51.917771    9063 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:32:51.917797    9063 fix.go:54] fixHost starting: 
	I1211 15:32:51.918424    9063 fix.go:112] recreateIfNeeded on kubernetes-upgrade-476000: state=Stopped err=<nil>
	W1211 15:32:51.918451    9063 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:32:51.928274    9063 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-476000" ...
	I1211 15:32:51.936229    9063 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:32:51.936550    9063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:84:f1:4a:c0:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubernetes-upgrade-476000/disk.qcow2
	I1211 15:32:51.946527    9063 main.go:141] libmachine: STDOUT: 
	I1211 15:32:51.946590    9063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:32:51.946673    9063 fix.go:56] duration metric: took 28.877625ms for fixHost
	I1211 15:32:51.946693    9063 start.go:83] releasing machines lock for "kubernetes-upgrade-476000", held for 29.630417ms
	W1211 15:32:51.946954    9063 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-476000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:32:51.955057    9063 out.go:201] 
	W1211 15:32:51.958228    9063 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:32:51.958256    9063 out.go:270] * 
	* 
	W1211 15:32:51.960150    9063 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:32:51.970290    9063 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-476000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-476000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-476000 version --output=json: exit status 1 (65.588833ms)

** stderr ** 
	error: context "kubernetes-upgrade-476000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-11 15:32:52.049275 -0800 PST m=+681.908199251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-476000 -n kubernetes-upgrade-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-476000 -n kubernetes-upgrade-476000: exit status 7 (38.824709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-476000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-476000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-476000
--- FAIL: TestKubernetesUpgrade (19.14s)
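
Every failed start in this test dies at the same step: the qemu2 driver launches QEMU through socket_vmnet_client, and nothing is accepting connections on /var/run/socket_vmnet on the build agent. A minimal way to reproduce the probe outside of minikube, using the client and socket paths taken verbatim from the log above (the trailing `true` is only an illustrative no-op command for the client to exec once connected, not something the test itself runs):

	# Is the daemon's UNIX socket present on the agent?
	ls -l /var/run/socket_vmnet

	# Connect the same way minikube does, but exec a no-op instead of QEMU;
	# a dead daemon reproduces the "Connection refused" error seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe is refused, restarting the socket_vmnet service on the agent (however it is managed there) should clear this whole family of GUEST_PROVISION failures.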

TestStoppedBinaryUpgrade/Upgrade (583.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2239321148 start -p stopped-upgrade-684000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2239321148 start -p stopped-upgrade-684000 --memory=2200 --vm-driver=qemu2 : (51.0592975s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2239321148 -p stopped-upgrade-684000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2239321148 -p stopped-upgrade-684000 stop: (12.112347709s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-684000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-684000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.47741025s)

-- stdout --
	* [stopped-upgrade-684000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-684000" primary control-plane node in "stopped-upgrade-684000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-684000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1211 15:33:45.294041    9116 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:33:45.294609    9116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:33:45.294613    9116 out.go:358] Setting ErrFile to fd 2...
	I1211 15:33:45.294616    9116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:33:45.294779    9116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:33:45.296071    9116 out.go:352] Setting JSON to false
	I1211 15:33:45.316880    9116 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5595,"bootTime":1733954430,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:33:45.316965    9116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:33:45.322543    9116 out.go:177] * [stopped-upgrade-684000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:33:45.330927    9116 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:33:45.330971    9116 notify.go:220] Checking for updates...
	I1211 15:33:45.339497    9116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:33:45.342514    9116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:33:45.346516    9116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:33:45.349457    9116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:33:45.352503    9116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:33:45.355920    9116 config.go:182] Loaded profile config "stopped-upgrade-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:33:45.359492    9116 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1211 15:33:45.362805    9116 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:33:45.367390    9116 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:33:45.374457    9116 start.go:297] selected driver: qemu2
	I1211 15:33:45.374462    9116 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61417 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:33:45.374539    9116 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:33:45.377669    9116 cni.go:84] Creating CNI manager for ""
	I1211 15:33:45.377708    9116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:33:45.377889    9116 start.go:340] cluster config:
	{Name:stopped-upgrade-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61417 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:33:45.378118    9116 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:33:45.386414    9116 out.go:177] * Starting "stopped-upgrade-684000" primary control-plane node in "stopped-upgrade-684000" cluster
	I1211 15:33:45.390334    9116 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:33:45.390353    9116 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1211 15:33:45.390358    9116 cache.go:56] Caching tarball of preloaded images
	I1211 15:33:45.390421    9116 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:33:45.390429    9116 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1211 15:33:45.390480    9116 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/config.json ...
	I1211 15:33:45.391009    9116 start.go:360] acquireMachinesLock for stopped-upgrade-684000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:33:45.391068    9116 start.go:364] duration metric: took 53.625µs to acquireMachinesLock for "stopped-upgrade-684000"
	I1211 15:33:45.391076    9116 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:33:45.391080    9116 fix.go:54] fixHost starting: 
	I1211 15:33:45.391189    9116 fix.go:112] recreateIfNeeded on stopped-upgrade-684000: state=Stopped err=<nil>
	W1211 15:33:45.391197    9116 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:33:45.395461    9116 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-684000" ...
	I1211 15:33:45.402669    9116 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:33:45.402755    9116 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.2.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/qemu.pid -nic user,model=virtio,hostfwd=tcp::61382-:22,hostfwd=tcp::61383-:2376,hostname=stopped-upgrade-684000 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/disk.qcow2
	I1211 15:33:45.450512    9116 main.go:141] libmachine: STDOUT: 
	I1211 15:33:45.450534    9116 main.go:141] libmachine: STDERR: 
	I1211 15:33:45.450541    9116 main.go:141] libmachine: Waiting for VM to start (ssh -p 61382 docker@127.0.0.1)...
	I1211 15:34:04.393911    9116 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/config.json ...
	I1211 15:34:04.394163    9116 machine.go:93] provisionDockerMachine start ...
	I1211 15:34:04.394250    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.394400    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.394404    9116 main.go:141] libmachine: About to run SSH command:
	hostname
	I1211 15:34:04.458783    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1211 15:34:04.458812    9116 buildroot.go:166] provisioning hostname "stopped-upgrade-684000"
	I1211 15:34:04.458879    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.458998    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.459005    9116 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-684000 && echo "stopped-upgrade-684000" | sudo tee /etc/hostname
	I1211 15:34:04.527279    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-684000
	
	I1211 15:34:04.527350    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.527469    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.527477    9116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-684000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-684000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-684000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 15:34:04.593834    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 15:34:04.593850    9116 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20083-6627/.minikube CaCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20083-6627/.minikube}
	I1211 15:34:04.593870    9116 buildroot.go:174] setting up certificates
	I1211 15:34:04.593875    9116 provision.go:84] configureAuth start
	I1211 15:34:04.593902    9116 provision.go:143] copyHostCerts
	I1211 15:34:04.593996    9116 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem, removing ...
	I1211 15:34:04.594278    9116 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem
	I1211 15:34:04.594372    9116 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.pem (1078 bytes)
	I1211 15:34:04.594570    9116 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem, removing ...
	I1211 15:34:04.594576    9116 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem
	I1211 15:34:04.594632    9116 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/cert.pem (1123 bytes)
	I1211 15:34:04.594757    9116 exec_runner.go:144] found /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem, removing ...
	I1211 15:34:04.594768    9116 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem
	I1211 15:34:04.594813    9116 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20083-6627/.minikube/key.pem (1675 bytes)
	I1211 15:34:04.594908    9116 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-684000 san=[127.0.0.1 localhost minikube stopped-upgrade-684000]
	I1211 15:34:04.659090    9116 provision.go:177] copyRemoteCerts
	I1211 15:34:04.659202    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 15:34:04.659211    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:34:04.694531    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 15:34:04.701259    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1211 15:34:04.708111    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 15:34:04.714585    9116 provision.go:87] duration metric: took 120.690916ms to configureAuth
	I1211 15:34:04.714593    9116 buildroot.go:189] setting minikube options for container-runtime
	I1211 15:34:04.714694    9116 config.go:182] Loaded profile config "stopped-upgrade-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:34:04.714747    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.714834    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.714839    9116 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1211 15:34:04.778839    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1211 15:34:04.778850    9116 buildroot.go:70] root file system type: tmpfs
	I1211 15:34:04.778911    9116 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1211 15:34:04.778972    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.779081    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.779115    9116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1211 15:34:04.846643    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1211 15:34:04.846714    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:04.846835    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:04.846845    9116 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1211 15:34:05.188631    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1211 15:34:05.188644    9116 machine.go:96] duration metric: took 794.499666ms to provisionDockerMachine
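	[editor's note] The `sudo diff -u ... || { sudo mv ...; systemctl ... }` step above is a compare-then-swap: the unit file is only replaced, and docker only reloaded/restarted, when the staged `.new` file differs (or, as here, when the old unit does not exist yet). A Go sketch of that update, assuming local paths rather than an SSH session:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(current, staged string) error {
	old, errOld := os.ReadFile(current)
	fresh, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if errOld == nil && bytes.Equal(old, fresh) {
		return nil // unchanged: skip the daemon-reload and restart
	}
	// differs (or missing, as in the log): swap in the staged unit
	if err := os.Rename(staged, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", args, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println(err)
	}
}
```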
	I1211 15:34:05.188652    9116 start.go:293] postStartSetup for "stopped-upgrade-684000" (driver="qemu2")
	I1211 15:34:05.188660    9116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 15:34:05.188744    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 15:34:05.188753    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:34:05.224182    9116 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 15:34:05.225516    9116 info.go:137] Remote host: Buildroot 2021.02.12
	I1211 15:34:05.225524    9116 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/addons for local assets ...
	I1211 15:34:05.225594    9116 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20083-6627/.minikube/files for local assets ...
	I1211 15:34:05.225685    9116 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem -> 71352.pem in /etc/ssl/certs
	I1211 15:34:05.225790    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 15:34:05.228972    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:05.236754    9116 start.go:296] duration metric: took 48.095834ms for postStartSetup
	I1211 15:34:05.236773    9116 fix.go:56] duration metric: took 19.846305417s for fixHost
	I1211 15:34:05.236831    9116 main.go:141] libmachine: Using SSH client type: native
	I1211 15:34:05.236938    9116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104afb1b0] 0x104afd9f0 <nil>  [] 0s} localhost 61382 <nil> <nil>}
	I1211 15:34:05.236942    9116 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 15:34:05.298986    9116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733960045.172601879
	
	I1211 15:34:05.298996    9116 fix.go:216] guest clock: 1733960045.172601879
	I1211 15:34:05.298999    9116 fix.go:229] Guest: 2024-12-11 15:34:05.172601879 -0800 PST Remote: 2024-12-11 15:34:05.236775 -0800 PST m=+20.058109126 (delta=-64.173121ms)
	I1211 15:34:05.299010    9116 fix.go:200] guest clock delta is within tolerance: -64.173121ms
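	[editor's note] The fix above parses the guest's `date +%s.%N` output and compares it against the host clock, resyncing only when the delta exceeds a tolerance. A sketch of that delta/tolerance check, with the guest timestamp hard-coded from the log line above:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestOut := "1733960045.172601879" // stand-in for the `date +%s.%N` SSH output
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(time.Now())
	const tolerance = time.Second // assumed tolerance for illustration
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v within tolerance\n", delta)
	}
}
```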
	I1211 15:34:05.299012    9116 start.go:83] releasing machines lock for "stopped-upgrade-684000", held for 19.908554167s
	I1211 15:34:05.299092    9116 ssh_runner.go:195] Run: cat /version.json
	I1211 15:34:05.299103    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:34:05.299092    9116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 15:34:05.299839    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	W1211 15:34:05.331524    9116 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1211 15:34:05.331583    9116 ssh_runner.go:195] Run: systemctl --version
	I1211 15:34:05.377843    9116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 15:34:05.380028    9116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 15:34:05.380089    9116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1211 15:34:05.382930    9116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1211 15:34:05.388103    9116 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
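	[editor's note] The find/sed pass above pins every bridge/podman CNI config to the 10.244.0.0/16 pod CIDR. A regex-based Go sketch of the same rewrite, applied to a single file (an approximation of the logged sed, not minikube source):

```go
package main

import (
	"os"
	"regexp"
)

// pinSubnet rewrites any "subnet" value in a CNI conflist to the pod CIDR.
func pinSubnet(path, cidr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	out := re.ReplaceAll(data, []byte(`"subnet": "`+cidr+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	_ = pinSubnet("/etc/cni/net.d/87-podman-bridge.conflist", "10.244.0.0/16")
}
```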
	I1211 15:34:05.388114    9116 start.go:495] detecting cgroup driver to use...
	I1211 15:34:05.388230    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:05.395344    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1211 15:34:05.398364    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1211 15:34:05.401715    9116 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1211 15:34:05.401754    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1211 15:34:05.405326    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:05.408451    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1211 15:34:05.411170    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1211 15:34:05.414133    9116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 15:34:05.417366    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1211 15:34:05.420607    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1211 15:34:05.423845    9116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1211 15:34:05.426919    9116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 15:34:05.430136    9116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 15:34:05.433355    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:05.497501    9116 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1211 15:34:05.508490    9116 start.go:495] detecting cgroup driver to use...
	I1211 15:34:05.508614    9116 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1211 15:34:05.514157    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:05.518904    9116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 15:34:05.525580    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 15:34:05.530511    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1211 15:34:05.535081    9116 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1211 15:34:05.593741    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1211 15:34:05.598928    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 15:34:05.604463    9116 ssh_runner.go:195] Run: which cri-dockerd
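	[editor's note] Both crictl.yaml writes above are one-line endpoint switches: first containerd, then, once the docker runtime is chosen, cri-dockerd. The equivalent write as a Go sketch (assumes root; the socket path is taken from the log):

```go
package main

import "os"

func main() {
	// Point crictl at the cri-dockerd socket, mirroring the printf|tee above.
	const conf = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
		panic(err)
	}
}
```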
	I1211 15:34:05.605676    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1211 15:34:05.608946    9116 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1211 15:34:05.614074    9116 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1211 15:34:05.682149    9116 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1211 15:34:05.749474    9116 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1211 15:34:05.749534    9116 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1211 15:34:05.755375    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:05.818729    9116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:06.961526    9116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.14279425s)
	I1211 15:34:06.961672    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1211 15:34:06.968536    9116 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1211 15:34:06.976149    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:06.982510    9116 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1211 15:34:07.049256    9116 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1211 15:34:07.114784    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:07.176383    9116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1211 15:34:07.182504    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1211 15:34:07.187509    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:07.246501    9116 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1211 15:34:07.285867    9116 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1211 15:34:07.285975    9116 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1211 15:34:07.287899    9116 start.go:563] Will wait 60s for crictl version
	I1211 15:34:07.287939    9116 ssh_runner.go:195] Run: which crictl
	I1211 15:34:07.289299    9116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 15:34:07.304979    9116 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1211 15:34:07.305057    9116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:07.321518    9116 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1211 15:34:07.341362    9116 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1211 15:34:07.341518    9116 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1211 15:34:07.342765    9116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 15:34:07.346715    9116 kubeadm.go:883] updating cluster {Name:stopped-upgrade-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61417 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1211 15:34:07.346760    9116 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1211 15:34:07.346811    9116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:07.357021    9116 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:07.357030    9116 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:07.357089    9116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:07.360205    9116 ssh_runner.go:195] Run: which lz4
	I1211 15:34:07.361448    9116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 15:34:07.362571    9116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 15:34:07.362581    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1211 15:34:08.341712    9116 docker.go:653] duration metric: took 980.334708ms to copy over tarball
	I1211 15:34:08.341787    9116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 15:34:09.523429    9116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1816645s)
	I1211 15:34:09.523443    9116 ssh_runner.go:146] rm: /preloaded.tar.lz4
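	[editor's note] The preload path above is: stat the tarball on the guest, scp it over when missing, extract with lz4 into /var, then delete it. A local-only Go sketch of the extract-and-clean half (`restorePreload` is a hypothetical helper; the real flow runs these commands over SSH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func restorePreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball must be copied over first: %w", err)
	}
	// mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %s", out)
	}
	return os.Remove(tarball) // free the disk space once extracted
}

func main() {
	if err := restorePreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
```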
	I1211 15:34:09.539029    9116 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1211 15:34:09.542288    9116 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1211 15:34:09.547447    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:09.612806    9116 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1211 15:34:11.203263    9116 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.590485292s)
	I1211 15:34:11.203364    9116 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1211 15:34:11.218754    9116 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1211 15:34:11.218764    9116 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1211 15:34:11.218769    9116 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1211 15:34:11.226436    9116 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1211 15:34:11.227742    9116 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.229395    9116 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.230585    9116 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.230636    9116 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1211 15:34:11.230740    9116 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.232733    9116 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.232751    9116 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.234430    9116 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.234433    9116 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.235574    9116 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.235976    9116 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:11.236987    9116 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.237081    9116 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:11.237951    9116 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:11.238651    9116 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:11.687928    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1211 15:34:11.700774    9116 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1211 15:34:11.700991    9116 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1211 15:34:11.701044    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1211 15:34:11.712041    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1211 15:34:11.712204    9116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1211 15:34:11.713832    9116 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1211 15:34:11.713848    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1211 15:34:11.725815    9116 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1211 15:34:11.725834    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1211 15:34:11.729973    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.753116    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.771078    9116 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1211 15:34:11.771140    9116 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1211 15:34:11.771157    9116 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.771225    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1211 15:34:11.772900    9116 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1211 15:34:11.772921    9116 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.772971    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1211 15:34:11.784862    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1211 15:34:11.784896    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1211 15:34:11.800271    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.811634    9116 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1211 15:34:11.811661    9116 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.811738    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1211 15:34:11.822526    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1211 15:34:11.857322    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.868091    9116 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1211 15:34:11.868121    9116 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.868189    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1211 15:34:11.878160    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1211 15:34:11.936863    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.948230    9116 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1211 15:34:11.948250    9116 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.948320    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1211 15:34:11.959052    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W1211 15:34:12.002414    9116 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:12.002569    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:12.015555    9116 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1211 15:34:12.015576    9116 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:12.015637    9116 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1211 15:34:12.025188    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1211 15:34:12.025329    9116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:12.026853    9116 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1211 15:34:12.026865    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1211 15:34:12.071184    9116 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1211 15:34:12.071197    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1211 15:34:12.108622    9116 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1211 15:34:12.722730    9116 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1211 15:34:12.722897    9116 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:12.738173    9116 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1211 15:34:12.738204    9116 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:12.738272    9116 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:34:12.754776    9116 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1211 15:34:12.754929    9116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:12.756346    9116 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1211 15:34:12.756364    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1211 15:34:12.788395    9116 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1211 15:34:12.788410    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1211 15:34:13.025514    9116 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1211 15:34:13.025562    9116 cache_images.go:92] duration metric: took 1.806841083s to LoadCachedImages
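	[editor's note] Each image above goes through the same cycle: inspect the tag in the runtime, and when the stored ID does not match the cached hash ("needs transfer"), remove the tag, copy the cached tarball in, and load it. A Go sketch of one such cycle (the expected ID below is truncated and illustrative; the log pipes the tarball through `docker load` rather than using `-i`):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func ensureImage(tag, wantID, cachedTar string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the expected hash
	}
	// stale or missing: drop the tag and reload from the cache tarball
	_ = exec.Command("docker", "rmi", tag).Run()
	if msg, err := exec.Command("docker", "load", "-i", cachedTar).CombinedOutput(); err != nil {
		return fmt.Errorf("docker load: %s", msg)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.7",
		"sha256:e5a475a03805", // truncated, illustrative expected ID
		"/var/lib/minikube/images/pause_3.7")
	if err != nil {
		fmt.Println(err)
	}
}
```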
	W1211 15:34:13.025797    9116 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1211 15:34:13.025806    9116 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1211 15:34:13.025995    9116 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-684000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 15:34:13.026073    9116 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1211 15:34:13.044144    9116 cni.go:84] Creating CNI manager for ""
	I1211 15:34:13.044160    9116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:34:13.044390    9116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 15:34:13.044404    9116 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-684000 NodeName:stopped-upgrade-684000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 15:34:13.044483    9116 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-684000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 15:34:13.044558    9116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1211 15:34:13.047949    9116 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 15:34:13.048021    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 15:34:13.051326    9116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1211 15:34:13.057207    9116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 15:34:13.063172    9116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1211 15:34:13.069609    9116 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1211 15:34:13.071122    9116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 15:34:13.075036    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:34:13.137810    9116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:34:13.148471    9116 certs.go:68] Setting up /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000 for IP: 10.0.2.15
	I1211 15:34:13.148480    9116 certs.go:194] generating shared ca certs ...
	I1211 15:34:13.148490    9116 certs.go:226] acquiring lock for ca certs: {Name:mk9a2f9aee3b15a0ae3e213800d46f88db78207a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.148877    9116 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key
	I1211 15:34:13.148989    9116 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key
	I1211 15:34:13.149119    9116 certs.go:256] generating profile certs ...
	I1211 15:34:13.149280    9116 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key
	I1211 15:34:13.149294    9116 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9
	I1211 15:34:13.149305    9116 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1211 15:34:13.260791    9116 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9 ...
	I1211 15:34:13.260830    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9: {Name:mk1cc3a9ab509aafe3dba5606719792a1c165d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.261415    9116 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9 ...
	I1211 15:34:13.261421    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9: {Name:mk906a74d2dc360661e7ccf4c6ed3103ec30a937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.261604    9116 certs.go:381] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt.f50424f9 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt
	I1211 15:34:13.261727    9116 certs.go:385] copying /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key.f50424f9 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key
	I1211 15:34:13.261978    9116 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/proxy-client.key
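	[editor's note] The apiserver cert above is issued with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]: the service VIP, loopback, and the node IP. A self-contained Go sketch of generating a serving cert with that SAN list (self-signed here for brevity; minikube signs with its own CA instead):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	// self-signed: template doubles as parent
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER certificate\n", len(der))
}
```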
	I1211 15:34:13.262155    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem (1338 bytes)
	W1211 15:34:13.262341    9116 certs.go:480] ignoring /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135_empty.pem, impossibly tiny 0 bytes
	I1211 15:34:13.262348    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 15:34:13.262369    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem (1078 bytes)
	I1211 15:34:13.262387    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem (1123 bytes)
	I1211 15:34:13.262405    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/key.pem (1675 bytes)
	I1211 15:34:13.262442    9116 certs.go:484] found cert: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem (1708 bytes)
	I1211 15:34:13.263788    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 15:34:13.270610    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 15:34:13.277577    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 15:34:13.285122    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1211 15:34:13.292563    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1211 15:34:13.299683    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 15:34:13.306107    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 15:34:13.313190    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 15:34:13.320358    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/ssl/certs/71352.pem --> /usr/share/ca-certificates/71352.pem (1708 bytes)
	I1211 15:34:13.326736    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 15:34:13.333599    9116 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/7135.pem --> /usr/share/ca-certificates/7135.pem (1338 bytes)
	I1211 15:34:13.340942    9116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 15:34:13.346437    9116 ssh_runner.go:195] Run: openssl version
	I1211 15:34:13.348297    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71352.pem && ln -fs /usr/share/ca-certificates/71352.pem /etc/ssl/certs/71352.pem"
	I1211 15:34:13.351298    9116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71352.pem
	I1211 15:34:13.352614    9116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:22 /usr/share/ca-certificates/71352.pem
	I1211 15:34:13.352638    9116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71352.pem
	I1211 15:34:13.354393    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71352.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 15:34:13.357386    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 15:34:13.360513    9116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:13.361939    9116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:13.361967    9116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 15:34:13.363838    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 15:34:13.366623    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135.pem && ln -fs /usr/share/ca-certificates/7135.pem /etc/ssl/certs/7135.pem"
	I1211 15:34:13.369893    9116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135.pem
	I1211 15:34:13.371377    9116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:22 /usr/share/ca-certificates/7135.pem
	I1211 15:34:13.371402    9116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135.pem
	I1211 15:34:13.373081    9116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7135.pem /etc/ssl/certs/51391683.0"
	I1211 15:34:13.376423    9116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 15:34:13.377980    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 15:34:13.379963    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 15:34:13.381771    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 15:34:13.383681    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 15:34:13.385483    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 15:34:13.387290    9116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
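	[editor's note] Each `openssl x509 -checkend 86400` run above asks whether a cert remains valid for at least another day. The same probe in Go (`expiresWithin` is a hypothetical helper; the path is taken from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err == nil && soon {
		fmt.Println("certificate expires within 24h; would regenerate")
	}
}
```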
	I1211 15:34:13.389202    9116 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61417 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1211 15:34:13.389282    9116 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:13.399433    9116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 15:34:13.402431    9116 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1211 15:34:13.402436    9116 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1211 15:34:13.402467    9116 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 15:34:13.405672    9116 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 15:34:13.405906    9116 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-684000" does not appear in /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:34:13.405925    9116 kubeconfig.go:62] /Users/jenkins/minikube-integration/20083-6627/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-684000" cluster setting kubeconfig missing "stopped-upgrade-684000" context setting]
	I1211 15:34:13.406116    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:34:13.407971    9116 kapi.go:59] client config for stopped-upgrade-684000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065580b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:34:13.413321    9116 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 15:34:13.416102    9116 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-684000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
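Note: the drift shown in the diff is what one would expect when a cluster created by an older minikube is restarted with a newer one. kubeadm for Kubernetes v1.24 expects the CRI socket to be given as a URI, so the bare path /var/run/cri-dockerd.sock gains a unix:// scheme, and the regenerated config also switches the kubelet cgroup driver to cgroupfs and adds hairpin and runtime-timeout settings. The detection itself is just the diff run in the preceding Run line, which can be reproduced on the node:

    # a non-zero exit (differences found) is what triggers the reconfigure above
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new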
	I1211 15:34:13.416111    9116 kubeadm.go:1160] stopping kube-system containers ...
	I1211 15:34:13.416176    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1211 15:34:13.426629    9116 docker.go:483] Stopping containers: [75ea3383cdcb f6ac5f0dd06f a36cfd33e9ad b21ec5886c57 ce6d2e2ea14f 42fa55656c01 fce2dc366bd4 081582cc5331]
	I1211 15:34:13.426703    9116 ssh_runner.go:195] Run: docker stop 75ea3383cdcb f6ac5f0dd06f a36cfd33e9ad b21ec5886c57 ce6d2e2ea14f 42fa55656c01 fce2dc366bd4 081582cc5331
	I1211 15:34:13.437182    9116 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 15:34:13.443057    9116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:34:13.445924    9116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 15:34:13.445932    9116 kubeadm.go:157] found existing configuration files:
	
	I1211 15:34:13.445963    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf
	I1211 15:34:13.448833    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 15:34:13.448862    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:34:13.451329    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf
	I1211 15:34:13.453928    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 15:34:13.453964    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:34:13.456942    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf
	I1211 15:34:13.459617    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 15:34:13.459651    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:34:13.462156    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf
	I1211 15:34:13.464949    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 15:34:13.464970    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 15:34:13.467672    9116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:34:13.470275    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:13.493880    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:13.925918    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:14.039704    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:14.070382    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 15:34:14.092621    9116 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:34:14.092709    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:14.594894    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:15.094758    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:34:15.111206    9116 api_server.go:72] duration metric: took 1.018616875s to wait for apiserver process to appear ...
	I1211 15:34:15.111220    9116 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:34:15.111230    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:20.114526    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:20.114637    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:25.115653    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:25.115674    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:30.116393    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:30.116410    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:35.117343    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:35.117393    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:40.118854    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:40.118891    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:45.120634    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:45.120660    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:50.122746    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:50.122792    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:34:55.123516    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:34:55.123563    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:00.125885    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:00.125982    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:05.128405    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:05.128442    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:10.130656    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:10.130752    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:15.132004    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
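Note: at this point the restart is stuck in a probe loop. Every healthz GET against https://10.0.2.15:8443 times out after roughly five seconds (the client timeout visible in the errors above), so minikube alternates between probing and gathering component logs, as in the cycles that follow. Assuming shell access to the guest, the same probe can be reproduced with curl; -k is needed because the apiserver certificate is not trusted outside the cluster:

    # mirrors the ~5s client timeout seen in the log
    curl -k --max-time 5 https://10.0.2.15:8443/healthz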
	I1211 15:35:15.133079    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:15.150368    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:15.150469    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:15.163213    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:15.163303    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:15.174265    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:15.174344    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:15.188953    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:15.189040    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:15.199578    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:15.199663    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:15.210840    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:15.210914    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:15.221384    9116 logs.go:282] 0 containers: []
	W1211 15:35:15.221395    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:15.221461    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:15.231848    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:15.231875    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:15.231880    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:15.269455    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:15.269465    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:15.283049    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:15.283065    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:15.301539    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:15.301552    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:15.312816    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:15.312829    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:15.317000    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:15.317007    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:15.425445    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:15.425459    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:15.457226    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:15.457239    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:15.468937    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:15.468949    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:15.482858    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:15.482869    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:15.500307    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:15.500318    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:15.511642    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:15.511655    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:15.526798    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:15.526808    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:15.538049    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:15.538059    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:15.549143    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:15.549153    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:15.567057    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:15.567069    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:15.582101    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:15.582112    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:18.107782    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:23.110038    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:23.110550    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:23.149486    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:23.149650    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:23.170510    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:23.170647    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:23.188106    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:23.188207    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:23.200531    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:23.200625    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:23.214403    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:23.214477    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:23.225123    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:23.225210    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:23.235821    9116 logs.go:282] 0 containers: []
	W1211 15:35:23.235835    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:23.235911    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:23.246465    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:23.246485    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:23.246490    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:23.261373    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:23.261382    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:23.286513    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:23.286525    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:23.303759    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:23.303771    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:23.314923    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:23.314934    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:23.326713    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:23.326725    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:23.343992    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:23.344003    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:23.355680    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:23.355694    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:23.392958    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:23.392968    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:23.435441    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:23.435452    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:23.447390    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:23.447400    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:23.462331    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:23.462340    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:23.467611    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:23.467620    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:23.481258    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:23.481269    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:23.493928    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:23.493943    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:23.519680    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:23.519690    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:23.532410    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:23.532421    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:26.048119    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:31.050413    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:31.050710    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:31.075751    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:31.075870    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:31.092507    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:31.092603    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:31.111100    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:31.111183    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:31.121924    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:31.122017    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:31.131811    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:31.131891    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:31.142900    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:31.142977    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:31.154022    9116 logs.go:282] 0 containers: []
	W1211 15:35:31.154035    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:31.154103    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:31.164955    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:31.164973    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:31.164979    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:31.190511    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:31.190523    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:31.204003    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:31.204015    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:31.229128    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:31.229143    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:31.244247    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:31.244258    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:31.248766    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:31.248775    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:31.285491    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:31.285503    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:31.300930    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:31.300941    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:31.312633    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:31.312646    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:31.326863    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:31.326873    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:31.344311    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:31.344322    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:31.356731    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:31.356756    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:31.396114    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:31.396123    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:31.410121    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:31.410131    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:31.424493    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:31.424506    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:31.445953    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:31.445966    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:31.457040    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:31.457050    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:33.984375    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:38.985100    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:38.985308    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:39.000952    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:39.001048    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:39.013682    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:39.013762    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:39.024889    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:39.024968    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:39.035725    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:39.035811    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:39.046412    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:39.046489    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:39.057238    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:39.057315    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:39.067012    9116 logs.go:282] 0 containers: []
	W1211 15:35:39.067025    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:39.067092    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:39.077778    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:39.077797    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:39.077803    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:39.091695    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:39.091710    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:39.103028    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:39.103042    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:39.117358    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:39.117368    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:39.131746    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:39.131757    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:39.150926    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:39.150937    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:39.189765    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:39.189777    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:39.218360    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:39.218373    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:39.232286    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:39.232297    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:39.252016    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:39.252027    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:39.263518    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:39.263529    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:39.275347    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:39.275358    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:39.293369    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:39.293383    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:39.297681    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:39.297687    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:39.332165    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:39.332175    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:39.343740    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:39.343751    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:39.355467    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:39.355477    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:41.882646    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:46.883081    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:46.883295    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:46.901263    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:46.901377    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:46.915025    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:46.915110    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:46.929339    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:46.929419    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:46.939922    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:46.939995    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:46.951410    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:46.951472    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:46.962443    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:46.962520    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:46.972551    9116 logs.go:282] 0 containers: []
	W1211 15:35:46.972565    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:46.972618    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:46.984034    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:46.984056    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:46.984062    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:47.013385    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:47.013396    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:47.027230    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:47.027243    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:47.039249    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:47.039263    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:47.043496    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:47.043505    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:47.057058    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:47.057069    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:47.068808    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:47.068819    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:47.108327    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:47.108339    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:47.122971    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:47.122982    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:47.137581    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:47.137594    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:47.154320    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:47.154330    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:47.177710    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:47.177716    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:47.216775    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:47.216787    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:47.230838    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:47.230848    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:47.242065    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:47.242077    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:47.256457    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:47.256468    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:47.267524    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:47.267536    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:49.781424    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:35:54.783580    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:35:54.783838    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:35:54.807313    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:35:54.807447    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:35:54.823302    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:35:54.823402    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:35:54.839523    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:35:54.839596    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:35:54.850174    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:35:54.850258    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:35:54.861494    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:35:54.861571    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:35:54.872426    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:35:54.872502    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:35:54.883353    9116 logs.go:282] 0 containers: []
	W1211 15:35:54.883364    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:35:54.883432    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:35:54.893924    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:35:54.893943    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:35:54.893948    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:35:54.931508    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:35:54.931517    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:35:54.935608    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:35:54.935615    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:35:54.970431    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:35:54.970443    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:35:54.985143    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:35:54.985159    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:35:54.996739    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:35:54.996751    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:35:55.011246    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:35:55.011257    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:35:55.022563    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:35:55.022575    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:35:55.050241    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:35:55.050252    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:35:55.061335    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:35:55.061347    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:35:55.075458    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:35:55.075469    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:35:55.087635    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:35:55.087646    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:35:55.101339    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:35:55.101350    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:35:55.112642    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:35:55.112653    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:35:55.126070    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:35:55.126082    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:35:55.140877    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:35:55.140888    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:35:55.158958    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:35:55.158968    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:35:57.684248    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:02.686685    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:02.686970    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:02.717418    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:02.717532    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:02.731096    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:02.731182    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:02.744137    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:02.744242    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:02.754741    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:02.754836    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:02.765032    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:02.765117    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:02.777239    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:02.777324    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:02.787642    9116 logs.go:282] 0 containers: []
	W1211 15:36:02.787653    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:02.787713    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:02.798035    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:02.798056    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:02.798062    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:02.822150    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:02.822162    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:02.833648    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:02.833663    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:02.858998    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:02.859008    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:02.870544    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:02.870557    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:02.911066    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:02.911077    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:02.936495    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:02.936508    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:02.952361    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:02.952373    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:02.971373    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:02.971385    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:02.983511    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:02.983524    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:02.998174    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:02.998186    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:03.010525    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:03.010537    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:03.014793    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:03.014802    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:03.050556    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:03.050571    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:03.062386    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:03.062397    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:03.079900    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:03.079910    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:03.094083    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:03.094094    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:05.609893    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:10.610235    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:10.610664    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:10.642381    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:10.642540    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:10.662167    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:10.662283    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:10.676659    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:10.676754    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:10.689064    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:10.689141    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:10.699601    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:10.699684    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:10.710834    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:10.710919    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:10.720829    9116 logs.go:282] 0 containers: []
	W1211 15:36:10.720848    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:10.720920    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:10.731458    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:10.731475    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:10.731481    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:10.772228    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:10.772240    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:10.786602    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:10.786613    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:10.811249    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:10.811260    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:10.825286    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:10.825297    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:10.837485    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:10.837497    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:10.841565    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:10.841572    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:10.855996    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:10.856006    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:10.876653    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:10.876665    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:10.889353    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:10.889364    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:10.928548    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:10.928563    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:10.943182    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:10.943197    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:10.954627    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:10.954638    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:10.967743    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:10.967753    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:10.993292    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:10.993308    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:11.006803    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:11.006815    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:11.021667    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:11.021681    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:13.535590    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:18.538184    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:18.538742    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:18.576424    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:18.576586    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:18.598632    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:18.598745    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:18.613715    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:18.613807    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:18.626221    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:18.626306    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:18.641288    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:18.641372    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:18.652122    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:18.652197    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:18.662568    9116 logs.go:282] 0 containers: []
	W1211 15:36:18.662580    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:18.662640    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:18.673382    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
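	The `N containers: [...]` lines are the output of the per-component `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls immediately above them; `-a` includes exited containers, which is presumably why most components list two IDs (an earlier dead attempt plus its restart), while the `0 containers` warning for kindnet is expected on a cluster that does not run that CNI. A hedged sketch of that enumeration, with component names taken from the log and the rest illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name carries
// the k8s_<component> prefix that the Docker runtime gives Kubernetes
// containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the W-level log line: an empty match is reported,
			// not treated as a fatal error.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```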
	I1211 15:36:18.673402    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:18.673408    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:18.699815    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:18.699829    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:18.714149    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:18.714164    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:18.752943    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:18.752951    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:18.767360    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:18.767372    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:18.792070    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:18.792082    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:18.848478    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:18.848493    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:18.862954    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:18.862966    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:18.874803    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:18.874816    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:18.886629    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:18.886642    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:18.899511    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:18.899522    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:18.904155    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:18.904164    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:18.918809    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:18.918820    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:18.930517    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:18.930528    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:18.944895    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:18.944905    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:18.968209    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:18.968219    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:18.983246    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:18.983262    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
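	After enumeration, each gathering pass shells its sources out through `/bin/bash -c`: `docker logs --tail 400 <id>` for containers, journalctl and dmesg for host services, and, for container status, a fallback chain that prefers crictl and drops back to plain docker. In that one-liner, `which crictl || echo crictl` leaves a bare `crictl` when the binary is absent, the resulting command fails, and the outer `||` falls through to `sudo docker ps -a`. A sketch of the pass, run locally rather than over SSH; the container ID is reused from the log and the helper name `gather` is invented for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-collection command through bash, the same style of
// invocation the ssh_runner lines above show (here executed locally).
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Println(string(out))
}

func main() {
	// Per-container logs: last 400 lines, matching --tail 400 in the log,
	// which keeps each pass bounded no matter how chatty a component is.
	gather("kube-apiserver [f4b1d8c80fa2]", "docker logs --tail 400 f4b1d8c80fa2")

	// Host-level sources go through journalctl and dmesg instead.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")

	// Container status: crictl if installed, otherwise fall back to docker.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```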
	I1211 15:36:21.498244    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:26.500761    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:26.501064    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:26.527692    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:26.527830    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:26.542533    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:26.542632    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:26.554838    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:26.554925    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:26.566049    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:26.566131    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:26.576336    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:26.576411    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:26.588197    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:26.588280    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:26.598296    9116 logs.go:282] 0 containers: []
	W1211 15:36:26.598306    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:26.598370    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:26.609093    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:26.609110    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:26.609115    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:26.644659    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:26.644669    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:26.670134    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:26.670145    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:26.681320    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:26.681333    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:26.695689    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:26.695701    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:26.709451    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:26.709463    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:26.721084    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:26.721094    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:26.732677    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:26.732687    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:26.756026    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:26.756035    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:26.768021    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:26.768034    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:26.806914    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:26.806923    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:26.810913    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:26.810922    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:26.825112    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:26.825122    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:26.842080    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:26.842094    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:26.856745    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:26.856761    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:26.870699    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:26.870711    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:26.882803    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:26.882814    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:29.395073    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:34.397261    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:34.397470    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:34.413384    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:34.413493    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:34.426202    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:34.426289    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:34.437245    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:34.437326    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:34.447706    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:34.447788    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:34.458470    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:34.458553    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:34.468902    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:34.468981    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:34.479419    9116 logs.go:282] 0 containers: []
	W1211 15:36:34.479431    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:34.479498    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:34.489838    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:34.489861    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:34.489867    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:34.507847    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:34.507861    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:34.512232    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:34.512238    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:34.539612    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:34.539624    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:34.553293    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:34.553306    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:34.565302    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:34.565313    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:34.582010    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:34.582020    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:34.593614    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:34.593623    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:34.605474    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:34.605487    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:34.640134    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:34.640145    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:34.654322    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:34.654333    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:34.665971    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:34.665982    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:34.680057    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:34.680068    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:34.694103    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:34.694113    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:34.733451    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:34.733460    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:34.748345    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:34.748354    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:34.760017    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:34.760027    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:37.286160    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:42.287553    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:42.287739    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:42.301184    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:42.301272    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:42.312349    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:42.312432    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:42.322865    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:42.322948    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:42.333545    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:42.333630    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:42.344208    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:42.344282    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:42.354242    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:42.354317    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:42.364490    9116 logs.go:282] 0 containers: []
	W1211 15:36:42.364503    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:42.364577    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:42.375222    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:42.375238    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:42.375244    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:42.411641    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:42.411649    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:42.415674    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:42.415681    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:42.427758    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:42.427769    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:42.439917    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:42.439930    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:42.474694    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:42.474706    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:42.488913    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:42.488923    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:42.500242    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:42.500253    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:42.519399    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:42.519409    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:42.534138    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:42.534149    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:42.545751    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:42.545761    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:42.559937    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:42.559947    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:42.573801    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:42.573810    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:42.585561    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:42.585572    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:42.610338    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:42.610348    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:42.621839    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:42.621850    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:42.635785    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:42.635795    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:45.161124    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:50.163448    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:50.163693    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:50.180004    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:50.180105    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:50.192921    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:50.193002    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:50.207335    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:50.207423    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:50.217943    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:50.218027    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:50.228765    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:50.228842    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:50.239031    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:50.239114    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:50.248869    9116 logs.go:282] 0 containers: []
	W1211 15:36:50.248881    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:50.248953    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:50.260681    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:50.260699    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:50.260705    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:50.275998    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:50.276007    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:50.300449    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:50.300460    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:36:50.312735    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:50.312746    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:50.327155    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:50.327165    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:50.338584    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:50.338597    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:50.362575    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:50.362586    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:50.380384    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:50.380397    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:50.391677    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:50.391689    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:50.426755    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:50.426768    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:50.440569    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:50.440581    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:50.451756    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:50.451766    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:50.463274    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:50.463285    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:50.478445    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:50.478456    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:50.482565    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:50.482574    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:50.497744    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:50.497761    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:50.534268    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:50.534276    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:53.047900    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:36:58.050506    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:36:58.050805    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:36:58.075695    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:36:58.075829    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:36:58.095092    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:36:58.095191    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:36:58.111717    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:36:58.111806    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:36:58.122124    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:36:58.122199    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:36:58.132517    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:36:58.132592    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:36:58.152329    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:36:58.152409    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:36:58.163308    9116 logs.go:282] 0 containers: []
	W1211 15:36:58.163320    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:36:58.163386    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:36:58.174249    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:36:58.174270    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:36:58.174276    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:36:58.189634    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:36:58.189645    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:36:58.209058    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:36:58.209073    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:36:58.223470    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:36:58.223482    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:36:58.246518    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:36:58.246530    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:36:58.250533    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:36:58.250540    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:36:58.285159    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:36:58.285172    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:36:58.299680    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:36:58.299691    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:36:58.310849    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:36:58.310860    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:36:58.322574    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:36:58.322584    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:36:58.339796    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:36:58.339809    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:36:58.351254    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:36:58.351269    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:36:58.390490    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:36:58.390498    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:36:58.407486    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:36:58.407499    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:36:58.432624    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:36:58.432635    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:36:58.444094    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:36:58.444106    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:36:58.455957    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:36:58.455969    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:00.970187    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:05.972407    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:05.972593    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:05.984615    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:05.984698    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:05.995230    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:05.995313    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:06.005994    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:06.006072    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:06.016423    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:06.016503    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:06.031738    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:06.031812    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:06.042057    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:06.042137    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:06.052812    9116 logs.go:282] 0 containers: []
	W1211 15:37:06.052830    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:06.052894    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:06.063423    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:06.063442    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:06.063447    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:06.076993    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:06.077004    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:06.091634    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:06.091647    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:06.102933    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:06.102943    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:06.117963    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:06.117978    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:06.131782    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:06.131793    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:06.150844    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:06.150855    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:06.164556    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:06.164568    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:06.184599    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:06.184618    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:06.225460    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:06.225481    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:06.229953    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:06.229960    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:06.266386    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:06.266399    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:06.291403    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:06.291414    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:06.303408    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:06.303418    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:06.315171    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:06.315181    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:06.335910    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:06.335921    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:06.348221    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:06.348234    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:08.875345    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:13.877896    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:13.878191    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:13.912333    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:13.912460    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:13.933772    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:13.933867    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:13.946113    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:13.946198    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:13.956370    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:13.956453    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:13.967112    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:13.967190    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:13.977745    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:13.977826    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:13.988142    9116 logs.go:282] 0 containers: []
	W1211 15:37:13.988159    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:13.988225    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:13.998275    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:13.998293    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:13.998299    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:14.009664    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:14.009680    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:14.023178    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:14.023189    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:14.040297    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:14.040307    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:14.053841    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:14.053851    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:14.068759    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:14.068771    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:14.083768    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:14.083778    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:14.095314    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:14.095325    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:14.106536    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:14.106547    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:14.143470    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:14.143501    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:14.148358    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:14.148368    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:14.184373    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:14.184383    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:14.196013    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:14.196027    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:14.214882    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:14.214893    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:14.227427    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:14.227438    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:14.255395    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:14.255407    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:14.267211    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:14.267220    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:16.792846    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:21.795359    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:21.795685    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:21.826444    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:21.826599    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:21.849859    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:21.849970    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:21.863317    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:21.863405    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:21.875000    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:21.875090    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:21.885844    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:21.885926    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:21.896571    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:21.896656    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:21.910121    9116 logs.go:282] 0 containers: []
	W1211 15:37:21.910132    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:21.910205    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:21.921049    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:21.921069    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:21.921075    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:21.946289    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:21.946301    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:21.961918    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:21.961927    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:21.976641    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:21.976652    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:21.988336    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:21.988347    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:21.992541    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:21.992550    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:22.005269    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:22.005281    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:22.042445    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:22.042465    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:22.079249    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:22.079259    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:22.095066    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:22.095077    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:22.112939    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:22.112952    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:22.137753    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:22.137760    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:22.151880    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:22.151890    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:22.172944    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:22.172954    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:22.184819    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:22.184830    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:22.202940    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:22.202952    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:22.217375    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:22.217388    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:24.729198    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:29.731702    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:29.731965    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:29.755364    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:29.755503    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:29.771159    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:29.771264    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:29.787669    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:29.787748    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:29.798105    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:29.798183    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:29.808490    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:29.808570    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:29.825735    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:29.825813    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:29.835680    9116 logs.go:282] 0 containers: []
	W1211 15:37:29.835692    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:29.835755    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:29.849112    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:29.849130    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:29.849136    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:29.860790    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:29.860802    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:29.872792    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:29.872805    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:29.887579    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:29.887589    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:29.906047    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:29.906059    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:29.921663    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:29.921673    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:29.933683    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:29.933695    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:29.972608    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:29.972620    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:29.987520    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:29.987536    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:29.999137    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:29.999150    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:30.013139    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:30.013150    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:30.035236    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:30.035245    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:30.039828    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:30.039837    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:30.053184    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:30.053195    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:30.077504    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:30.077515    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:30.113093    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:30.113104    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:30.126921    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:30.126931    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:32.640545    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:37.641501    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:37.641705    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:37.659534    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:37.659632    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:37.672691    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:37.672785    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:37.683917    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:37.683999    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:37.694005    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:37.694083    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:37.704462    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:37.704543    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:37.717892    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:37.717972    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:37.730391    9116 logs.go:282] 0 containers: []
	W1211 15:37:37.730404    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:37.730473    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:37.740991    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:37.741008    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:37.741013    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:37.764734    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:37.764741    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:37.802243    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:37.802252    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:37.816603    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:37.816614    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:37.830206    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:37.830216    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:37.841427    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:37.841436    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:37.852724    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:37.852733    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:37.856809    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:37.856819    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:37.891327    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:37.891340    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:37.912597    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:37.912608    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:37.930327    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:37.930337    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:37.942362    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:37.942374    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:37.957268    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:37.957279    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:37.968457    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:37.968468    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:37.983047    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:37.983059    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:37.994867    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:37.994878    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:38.020235    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:38.020246    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:40.533747    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:45.536073    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:45.536697    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:45.575385    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:45.575553    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:45.596243    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:45.596360    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:45.611536    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:45.611633    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:45.629655    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:45.629758    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:45.640557    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:45.640637    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:45.651753    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:45.651836    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:45.665520    9116 logs.go:282] 0 containers: []
	W1211 15:37:45.665533    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:45.665598    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:45.676218    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:45.676237    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:45.676242    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:45.690142    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:45.690153    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:45.701642    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:45.701651    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:45.705844    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:45.705851    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:45.730208    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:45.730219    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:45.743161    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:45.743171    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:45.757024    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:45.757035    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:45.768457    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:45.768469    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:45.806666    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:45.806676    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:45.820775    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:45.820786    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:45.835392    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:45.835402    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:45.847494    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:45.847507    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:45.865245    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:45.865256    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:45.879962    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:45.879973    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:45.918947    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:45.918960    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:45.942145    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:45.942154    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:45.954002    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:45.954012    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:48.466390    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:37:53.466993    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:37:53.467466    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:37:53.496710    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:37:53.496849    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:37:53.520281    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:37:53.520376    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:37:53.533287    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:37:53.533362    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:37:53.544258    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:37:53.544337    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:37:53.554508    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:37:53.554585    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:37:53.564966    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:37:53.565036    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:37:53.575274    9116 logs.go:282] 0 containers: []
	W1211 15:37:53.575290    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:37:53.575353    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:37:53.585855    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:37:53.585872    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:37:53.585878    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:37:53.599934    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:37:53.599947    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:37:53.611746    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:37:53.611756    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:37:53.633660    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:37:53.633667    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:37:53.638142    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:37:53.638148    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:37:53.675013    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:37:53.675023    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:37:53.690167    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:37:53.690180    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:37:53.705867    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:37:53.705882    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:37:53.718170    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:37:53.718181    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:37:53.732601    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:37:53.732616    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:37:53.757752    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:37:53.757762    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:37:53.769444    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:37:53.769453    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:37:53.780962    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:37:53.780971    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:37:53.820894    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:37:53.820903    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:37:53.839925    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:37:53.839938    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:37:53.854402    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:37:53.854414    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:37:53.869820    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:37:53.869831    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:37:56.383992    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:01.386123    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:01.386375    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:01.403147    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:38:01.403248    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:01.416206    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:38:01.416289    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:01.426923    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:38:01.427007    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:01.437654    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:38:01.437752    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:01.447709    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:38:01.447788    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:01.458313    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:38:01.458404    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:01.469108    9116 logs.go:282] 0 containers: []
	W1211 15:38:01.469120    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:01.469186    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:01.479610    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:38:01.479632    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:01.479638    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:01.483769    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:01.483779    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:01.521831    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:38:01.521842    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:38:01.538646    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:38:01.538657    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:38:01.550344    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:38:01.550358    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:01.562033    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:38:01.562043    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:38:01.588033    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:38:01.588044    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:38:01.602122    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:38:01.602135    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:38:01.616532    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:38:01.616542    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:38:01.650420    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:38:01.650437    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:38:01.668337    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:38:01.668349    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:38:01.687068    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:01.687080    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:01.710417    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:01.710425    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:01.751033    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:38:01.751042    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:38:01.768575    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:38:01.768586    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:38:01.780003    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:38:01.780015    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:38:01.796799    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:38:01.796814    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:38:04.313314    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:09.315477    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:09.315672    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:38:09.327314    9116 logs.go:282] 2 containers: [f4b1d8c80fa2 75ea3383cdcb]
	I1211 15:38:09.327405    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:38:09.338436    9116 logs.go:282] 2 containers: [39eeb1c4ec7f f6ac5f0dd06f]
	I1211 15:38:09.338527    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:38:09.348799    9116 logs.go:282] 1 containers: [56bc0d0402f2]
	I1211 15:38:09.348883    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:38:09.359661    9116 logs.go:282] 2 containers: [e83416f284e8 a36cfd33e9ad]
	I1211 15:38:09.359750    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:38:09.370406    9116 logs.go:282] 1 containers: [28df36aa1cb0]
	I1211 15:38:09.370487    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:38:09.380763    9116 logs.go:282] 2 containers: [f360853fe5f7 ce6d2e2ea14f]
	I1211 15:38:09.380846    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:38:09.391629    9116 logs.go:282] 0 containers: []
	W1211 15:38:09.391643    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:38:09.391710    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:38:09.407115    9116 logs.go:282] 2 containers: [b975235ecc20 29bff23caad5]
	I1211 15:38:09.407134    9116 logs.go:123] Gathering logs for kube-proxy [28df36aa1cb0] ...
	I1211 15:38:09.407140    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28df36aa1cb0"
	I1211 15:38:09.419032    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:38:09.419047    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:38:09.423142    9116 logs.go:123] Gathering logs for kube-apiserver [f4b1d8c80fa2] ...
	I1211 15:38:09.423150    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4b1d8c80fa2"
	I1211 15:38:09.439366    9116 logs.go:123] Gathering logs for kube-controller-manager [ce6d2e2ea14f] ...
	I1211 15:38:09.439378    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6d2e2ea14f"
	I1211 15:38:09.453255    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:38:09.453267    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:38:09.476910    9116 logs.go:123] Gathering logs for kube-scheduler [e83416f284e8] ...
	I1211 15:38:09.476921    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83416f284e8"
	I1211 15:38:09.489300    9116 logs.go:123] Gathering logs for kube-scheduler [a36cfd33e9ad] ...
	I1211 15:38:09.489311    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36cfd33e9ad"
	I1211 15:38:09.503816    9116 logs.go:123] Gathering logs for etcd [39eeb1c4ec7f] ...
	I1211 15:38:09.503831    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39eeb1c4ec7f"
	I1211 15:38:09.518390    9116 logs.go:123] Gathering logs for etcd [f6ac5f0dd06f] ...
	I1211 15:38:09.518403    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ac5f0dd06f"
	I1211 15:38:09.536072    9116 logs.go:123] Gathering logs for coredns [56bc0d0402f2] ...
	I1211 15:38:09.536084    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56bc0d0402f2"
	I1211 15:38:09.568606    9116 logs.go:123] Gathering logs for kube-controller-manager [f360853fe5f7] ...
	I1211 15:38:09.568618    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f360853fe5f7"
	I1211 15:38:09.593829    9116 logs.go:123] Gathering logs for storage-provisioner [b975235ecc20] ...
	I1211 15:38:09.593845    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b975235ecc20"
	I1211 15:38:09.605540    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:38:09.605555    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:38:09.617288    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:38:09.617296    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:38:09.651795    9116 logs.go:123] Gathering logs for kube-apiserver [75ea3383cdcb] ...
	I1211 15:38:09.651811    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75ea3383cdcb"
	I1211 15:38:09.676613    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:38:09.676625    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:38:09.714612    9116 logs.go:123] Gathering logs for storage-provisioner [29bff23caad5] ...
	I1211 15:38:09.714621    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29bff23caad5"
	I1211 15:38:12.227877    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:17.230224    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:17.230310    9116 kubeadm.go:597] duration metric: took 4m3.835391125s to restartPrimaryControlPlane
	W1211 15:38:17.230376    9116 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
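
The four minutes summarized above are one repeating diagnostic cycle: minikube probes the apiserver's /healthz endpoint with a short client timeout and, on each failure, re-lists the control-plane containers and tails their logs before retrying. A minimal Go sketch of that probe loop follows; the URL is taken from the log, while the 5s client timeout and ~2.5s retry pause are inferred from the timestamps and are assumptions, not minikube's actual constants.

	package main

	// Sketch of the healthz probe cycle seen above; intervals are assumed.
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The VM serves a self-signed cert; verification is skipped
			// here only to keep the sketch self-contained.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // surfaces as "context deadline exceeded" in the log
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		for {
			if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
				fmt.Println("stopped:", err)
				time.Sleep(2500 * time.Millisecond) // log gathering happens here
				continue
			}
			fmt.Println("apiserver healthy")
			return
		}
	}
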
	I1211 15:38:17.230400    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1211 15:38:18.315171    9116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.084791667s)
	I1211 15:38:18.315246    9116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 15:38:18.320408    9116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 15:38:18.323291    9116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 15:38:18.325999    9116 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 15:38:18.326011    9116 kubeadm.go:157] found existing configuration files:
	
	I1211 15:38:18.326045    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf
	I1211 15:38:18.328578    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 15:38:18.328611    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 15:38:18.331935    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf
	I1211 15:38:18.334728    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 15:38:18.334753    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 15:38:18.337522    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf
	I1211 15:38:18.340318    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 15:38:18.340342    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 15:38:18.343580    9116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf
	I1211 15:38:18.346094    9116 kubeadm.go:163] "https://control-plane.minikube.internal:61417" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61417 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 15:38:18.346123    9116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
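
The grep/rm pairs above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here every grep exits with status 2 because the files are already gone after the reset). A rough Go equivalent, with the endpoint and file list copied from the log; the loop structure is an illustrative assumption, not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:61417"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file itself)
			// is missing - status 2 in the log, since the files are gone.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
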
	I1211 15:38:18.348911    9116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 15:38:18.367790    9116 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1211 15:38:18.367827    9116 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 15:38:18.415909    9116 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 15:38:18.416020    9116 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 15:38:18.416095    9116 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1211 15:38:18.467008    9116 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 15:38:18.470996    9116 out.go:235]   - Generating certificates and keys ...
	I1211 15:38:18.471103    9116 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 15:38:18.471259    9116 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 15:38:18.471362    9116 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 15:38:18.471400    9116 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1211 15:38:18.471442    9116 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 15:38:18.471472    9116 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1211 15:38:18.471507    9116 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1211 15:38:18.471554    9116 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1211 15:38:18.471600    9116 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 15:38:18.471642    9116 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 15:38:18.471681    9116 kubeadm.go:310] [certs] Using the existing "sa" key
	I1211 15:38:18.471715    9116 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 15:38:18.517256    9116 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 15:38:18.606874    9116 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 15:38:18.824285    9116 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 15:38:18.885000    9116 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 15:38:18.912618    9116 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 15:38:18.912991    9116 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 15:38:18.913092    9116 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 15:38:18.985465    9116 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 15:38:18.989373    9116 out.go:235]   - Booting up control plane ...
	I1211 15:38:18.989418    9116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 15:38:18.989457    9116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 15:38:18.989508    9116 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 15:38:18.989558    9116 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 15:38:18.989643    9116 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1211 15:38:23.487682    9116 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501003 seconds
	I1211 15:38:23.487745    9116 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 15:38:23.491460    9116 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 15:38:24.002675    9116 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 15:38:24.002937    9116 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-684000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 15:38:24.506696    9116 kubeadm.go:310] [bootstrap-token] Using token: dsob3n.pl1rcy9wqctvzov5
	I1211 15:38:24.513125    9116 out.go:235]   - Configuring RBAC rules ...
	I1211 15:38:24.513186    9116 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 15:38:24.513241    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 15:38:24.515170    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 15:38:24.519567    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 15:38:24.520471    9116 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 15:38:24.521576    9116 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 15:38:24.526339    9116 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 15:38:24.682083    9116 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 15:38:24.910158    9116 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 15:38:24.910585    9116 kubeadm.go:310] 
	I1211 15:38:24.910613    9116 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 15:38:24.910617    9116 kubeadm.go:310] 
	I1211 15:38:24.910650    9116 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 15:38:24.910654    9116 kubeadm.go:310] 
	I1211 15:38:24.910664    9116 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 15:38:24.910692    9116 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 15:38:24.910722    9116 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 15:38:24.910728    9116 kubeadm.go:310] 
	I1211 15:38:24.910764    9116 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 15:38:24.910767    9116 kubeadm.go:310] 
	I1211 15:38:24.910796    9116 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 15:38:24.910799    9116 kubeadm.go:310] 
	I1211 15:38:24.910835    9116 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 15:38:24.910873    9116 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 15:38:24.910914    9116 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 15:38:24.910920    9116 kubeadm.go:310] 
	I1211 15:38:24.910959    9116 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 15:38:24.910995    9116 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 15:38:24.910998    9116 kubeadm.go:310] 
	I1211 15:38:24.911053    9116 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dsob3n.pl1rcy9wqctvzov5 \
	I1211 15:38:24.911106    9116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 \
	I1211 15:38:24.911117    9116 kubeadm.go:310] 	--control-plane 
	I1211 15:38:24.911119    9116 kubeadm.go:310] 
	I1211 15:38:24.911167    9116 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 15:38:24.911171    9116 kubeadm.go:310] 
	I1211 15:38:24.911244    9116 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dsob3n.pl1rcy9wqctvzov5 \
	I1211 15:38:24.911295    9116 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d49e2bb776362b8f3de097afdeb999a6cd72c9e172f75d4b314d4105a8117ae2 
	I1211 15:38:24.911487    9116 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
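
The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA's public key in DER-encoded SubjectPublicKeyInfo form. A small Go program that reproduces the same digest from the CA certificate; the path reuses the certificateDir reported by kubeadm earlier in this init run.

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
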
	I1211 15:38:24.911511    9116 cni.go:84] Creating CNI manager for ""
	I1211 15:38:24.911519    9116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:38:24.915004    9116 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 15:38:24.922189    9116 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 15:38:24.925231    9116 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
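
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is a bridge-plus-portmap CNI chain. The reconstruction below shows what such a conflist typically contains; the field values are representative of a default bridge setup, not a byte-for-byte copy of minikube's file.

	package main

	import "os"

	// conflist is a representative bridge CNI chain; minikube's actual
	// 1-k8s.conflist may differ in names and subnet.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}
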
	I1211 15:38:24.930274    9116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 15:38:24.930334    9116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 15:38:24.930335    9116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-684000 minikube.k8s.io/updated_at=2024_12_11T15_38_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=stopped-upgrade-684000 minikube.k8s.io/primary=true
	I1211 15:38:24.973780    9116 ops.go:34] apiserver oom_adj: -16
	I1211 15:38:24.973777    9116 kubeadm.go:1113] duration metric: took 43.496542ms to wait for elevateKubeSystemPrivileges
	I1211 15:38:24.973818    9116 kubeadm.go:394] duration metric: took 4m11.592381375s to StartCluster
	I1211 15:38:24.973829    9116 settings.go:142] acquiring lock: {Name:mk7be6692255448ff6d4be3295ef81ca16b62a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:24.974010    9116 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:38:24.974396    9116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/kubeconfig: {Name:mkbb4a262cd8684046b6244fd6ca1d80f2c17ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:38:24.974718    9116 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:38:24.974809    9116 config.go:182] Loaded profile config "stopped-upgrade-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1211 15:38:24.974969    9116 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 15:38:24.975004    9116 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-684000"
	I1211 15:38:24.975013    9116 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-684000"
	W1211 15:38:24.975019    9116 addons.go:243] addon storage-provisioner should already be in state true
	I1211 15:38:24.975029    9116 host.go:66] Checking if "stopped-upgrade-684000" exists ...
	I1211 15:38:24.975042    9116 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-684000"
	I1211 15:38:24.975101    9116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-684000"
	I1211 15:38:24.975483    9116 retry.go:31] will retry after 1.423033113s: connect: dial unix /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/monitor: connect: connection refused
	I1211 15:38:24.979227    9116 out.go:177] * Verifying Kubernetes components...
	I1211 15:38:24.987157    9116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 15:38:24.991164    9116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 15:38:24.995204    9116 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:24.995212    9116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 15:38:24.995220    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:38:25.060991    9116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 15:38:25.065840    9116 api_server.go:52] waiting for apiserver process to appear ...
	I1211 15:38:25.065885    9116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 15:38:25.069675    9116 api_server.go:72] duration metric: took 94.948167ms to wait for apiserver process to appear ...
	I1211 15:38:25.069684    9116 api_server.go:88] waiting for apiserver healthz status ...
	I1211 15:38:25.069692    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:25.091338    9116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 15:38:26.399824    9116 kapi.go:59] client config for stopped-upgrade-684000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/stopped-upgrade-684000/client.key", CAFile:"/Users/jenkins/minikube-integration/20083-6627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065580b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 15:38:26.400846    9116 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-684000"
	W1211 15:38:26.400852    9116 addons.go:243] addon default-storageclass should already be in state true
	I1211 15:38:26.400863    9116 host.go:66] Checking if "stopped-upgrade-684000" exists ...
	I1211 15:38:26.401535    9116 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:26.401542    9116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 15:38:26.401548    9116 sshutil.go:53] new ssh client: &{IP:localhost Port:61382 SSHKeyPath:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/stopped-upgrade-684000/id_rsa Username:docker}
	I1211 15:38:26.438210    9116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 15:38:26.515946    9116 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 15:38:26.515960    9116 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 15:38:30.071625    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:30.071684    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:35.071876    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:35.071906    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:40.072133    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:40.072154    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:45.072451    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:45.072519    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:50.072980    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:50.073057    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:38:55.073661    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:38:55.073684    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1211 15:38:56.518098    9116 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1211 15:38:56.521501    9116 out.go:177] * Enabled addons: storage-provisioner
	I1211 15:38:56.527294    9116 addons.go:510] duration metric: took 31.553426708s for enable addons: enabled=[storage-provisioner]
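
The default-storageclass failure above happens at the first API call the addon makes: listing StorageClasses. A client-go sketch of that call against the same kubeconfig follows; with the apiserver unreachable it fails with the identical "dial tcp ... i/o timeout". The package paths are standard client-go, and the error handling is illustrative only.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// This is the request named in the error above: GET
		// /apis/storage.k8s.io/v1/storageclasses.
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err) // "dial tcp 10.0.2.15:8443: i/o timeout" while down
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}
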
	I1211 15:39:00.074474    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:00.074524    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:05.074812    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:05.074871    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:10.075999    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:10.076034    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:15.077323    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:15.077347    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:20.079676    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:20.079738    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:25.081952    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:25.082073    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:25.093553    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:25.093638    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:25.104355    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:25.104427    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:25.119018    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:25.119096    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:25.129768    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:25.129849    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:25.140112    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:25.140192    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:25.150472    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:25.150557    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:25.160835    9116 logs.go:282] 0 containers: []
	W1211 15:39:25.160850    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:25.160921    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:25.171058    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:25.171075    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:25.171081    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:25.188613    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:25.188625    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:25.214438    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:25.214448    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:25.228379    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:25.228391    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:25.264105    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:25.264116    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:25.278014    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:25.278027    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:25.300586    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:25.300599    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:25.311861    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:25.311870    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:25.323664    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:25.323677    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:25.338070    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:25.338081    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:25.350509    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:25.350519    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:25.361610    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:25.361620    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:25.396835    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:25.396847    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:27.903596    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:32.906044    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:32.906188    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:32.919037    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:32.919123    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:32.930540    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:32.930622    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:32.940689    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:32.940774    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:32.950969    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:32.951049    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:32.961345    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:32.961449    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:32.971719    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:32.971806    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:32.986377    9116 logs.go:282] 0 containers: []
	W1211 15:39:32.986389    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:32.986452    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:32.998513    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:32.998527    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:32.998532    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:33.012623    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:33.012635    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:33.028353    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:33.028362    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:33.043323    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:33.043338    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:33.061109    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:33.061119    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:33.065479    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:33.065486    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:33.103196    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:33.103210    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:33.123764    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:33.123777    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:33.135574    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:33.135586    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:33.147535    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:33.147548    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:33.172771    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:33.172782    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:33.184602    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:33.184615    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:33.219654    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:33.219667    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
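
Each failed health check is followed by a container census: one "docker ps -a" per control-plane component, filtered on the kubelet's k8s_<name> container naming and formatted down to bare IDs, exactly as logged above. A sketch of that step, assuming local exec in place of the SSH session the log uses; the helper name is made up for the example:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited, hence -a)
    // whose name matches the kubelet's k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
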
	I1211 15:39:35.735983    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:40.738089    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:40.738413    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:40.761894    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:40.762030    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:40.777869    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:40.777953    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:40.790696    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:40.790783    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:40.801606    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:40.801678    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:40.812258    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:40.812334    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:40.822775    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:40.822852    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:40.833457    9116 logs.go:282] 0 containers: []
	W1211 15:39:40.833471    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:40.833532    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:40.844141    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:40.844157    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:40.844165    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:40.883506    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:40.883522    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:40.895618    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:40.895634    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:40.910460    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:40.910474    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:40.922966    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:40.922982    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:40.934462    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:40.934473    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:40.958442    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:40.958452    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:40.969384    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:40.969396    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:41.004539    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:41.004561    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:41.009328    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:41.009334    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:41.023966    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:41.023977    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:41.038873    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:41.038888    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:41.051471    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:41.051486    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:43.569342    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:48.569499    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:48.569733    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:48.585592    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:48.585687    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:48.597500    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:48.597588    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:48.608441    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:48.608526    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:48.619339    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:48.619446    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:48.634789    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:48.634874    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:48.645892    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:48.645971    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:48.658182    9116 logs.go:282] 0 containers: []
	W1211 15:39:48.658196    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:48.658267    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:48.668819    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:48.668835    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:48.668840    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:48.682987    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:48.682999    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:48.696176    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:48.696187    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:48.708139    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:48.708150    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:48.722830    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:48.722839    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:48.739133    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:48.739144    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:48.773127    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:48.773135    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:48.777284    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:48.777292    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:39:48.813974    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:48.813988    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:48.837473    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:48.837481    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:48.848575    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:48.848587    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:48.863613    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:48.863623    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:48.875300    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:48.875311    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:51.392045    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:39:56.394125    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:39:56.394430    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:39:56.419377    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:39:56.419490    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:39:56.433684    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:39:56.433772    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:39:56.445282    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:39:56.445383    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:39:56.455786    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:39:56.455868    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:39:56.466512    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:39:56.466591    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:39:56.476808    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:39:56.476889    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:39:56.489658    9116 logs.go:282] 0 containers: []
	W1211 15:39:56.489675    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:39:56.489739    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:39:56.500236    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:39:56.500257    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:39:56.500262    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:39:56.514554    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:39:56.514567    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:39:56.526000    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:39:56.526013    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:39:56.537819    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:39:56.537833    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:39:56.552498    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:39:56.552511    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:39:56.569900    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:39:56.569911    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:39:56.582661    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:39:56.582672    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:39:56.591783    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:39:56.591793    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:39:56.607026    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:39:56.607037    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:39:56.619128    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:39:56.619142    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:39:56.631186    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:39:56.631199    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:39:56.657147    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:39:56.657159    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:39:56.691985    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:39:56.691994    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
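
Once the census is done, each source is dumped with a fixed-size tail: docker logs --tail 400 for per-container logs, journalctl -n 400 for the kubelet and Docker units, dmesg piped through tail for kernel messages, and kubectl describe nodes against the in-VM kubeconfig. All of them go through /bin/bash -c, which is the ssh_runner pattern visible above. A condensed sketch, again with local exec standing in for the SSH session and the etcd container ID copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs each named shell command and reports failures; a real
    // collector would keep the captured output per source rather than
    // discarding it.
    func gather(sources map[string]string) {
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("%s failed: %v (%d bytes captured)\n", name, err, len(out))
            }
        }
    }

    func main() {
        gather(map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "etcd":    "docker logs --tail 400 920d8038872e",
        })
    }
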
	I1211 15:39:59.232716    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:04.234903    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:04.235422    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:04.275429    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:04.275591    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:04.295827    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:04.295935    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:04.310916    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:04.311005    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:04.323298    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:04.323372    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:04.333722    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:04.333814    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:04.344302    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:04.344380    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:04.354511    9116 logs.go:282] 0 containers: []
	W1211 15:40:04.354522    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:04.354589    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:04.370598    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:04.370615    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:04.370621    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:04.382793    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:04.382806    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:04.397324    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:04.397335    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:04.415162    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:04.415173    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:04.450602    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:04.450613    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:04.454915    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:04.454925    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:04.490856    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:04.490867    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:04.505403    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:04.505416    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:04.517454    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:04.517468    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:04.543305    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:04.543315    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:04.557602    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:04.557612    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:04.569584    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:04.569595    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:04.581243    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:04.581253    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:07.095539    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:12.097610    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:12.097916    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:12.125921    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:12.126033    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:12.141913    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:12.142008    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:12.158462    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:12.158549    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:12.168751    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:12.168835    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:12.179304    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:12.179383    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:12.190334    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:12.190416    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:12.200226    9116 logs.go:282] 0 containers: []
	W1211 15:40:12.200237    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:12.200305    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:12.211169    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:12.211184    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:12.211190    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:12.223411    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:12.223422    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:12.242381    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:12.242389    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:12.254084    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:12.254098    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:12.289870    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:12.289880    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:12.295390    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:12.295398    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:12.310352    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:12.310361    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:12.324071    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:12.324081    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:12.348543    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:12.348563    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:12.360180    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:12.360200    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:12.396382    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:12.396395    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:12.408740    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:12.408754    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:12.422452    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:12.422467    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:14.942974    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:19.945062    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:19.945244    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:19.959293    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:19.959380    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:19.970798    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:19.970885    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:19.981588    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:19.981667    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:19.991964    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:19.992034    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:20.003159    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:20.003248    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:20.013953    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:20.014027    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:20.028292    9116 logs.go:282] 0 containers: []
	W1211 15:40:20.028304    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:20.028372    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:20.045884    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:20.045901    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:20.045907    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:20.080738    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:20.080747    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:20.095296    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:20.095306    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:20.109812    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:20.109822    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:20.125345    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:20.125355    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:20.142468    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:20.142479    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:20.153628    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:20.153638    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:20.164997    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:20.165008    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:20.169264    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:20.169270    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:20.209180    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:20.209193    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:20.221007    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:20.221018    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:20.232595    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:20.232605    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:20.244006    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:20.244016    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
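
The "container status" command is the one non-obvious shell idiom in the cycle: `which crictl || echo crictl` substitutes crictl's full path when the binary exists and a bare, failing name when it does not, and the outer `|| sudo docker ps -a` turns that failure into a fallback to the Docker CLI. Run verbatim through bash, as in this hypothetical local sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // copied from the log: try crictl if installed, else fall back to docker
        const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("both crictl and docker listings failed:", err)
        }
    }
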
	I1211 15:40:22.769269    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:27.769810    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:27.769943    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:27.786550    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:27.786637    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:27.798098    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:27.798186    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:27.808827    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:27.808903    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:27.819062    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:27.819154    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:27.829800    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:27.829882    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:27.840209    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:27.840289    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:27.850306    9116 logs.go:282] 0 containers: []
	W1211 15:40:27.850317    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:27.850375    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:27.861228    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:27.861248    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:27.861254    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:27.872796    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:27.872809    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:27.895641    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:27.895651    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:27.899714    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:27.899722    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:27.935049    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:27.935060    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:27.949064    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:27.949074    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:27.960603    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:27.960613    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:27.972523    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:27.972532    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:27.990961    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:27.990973    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:28.003461    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:28.003474    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:28.038555    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:28.038566    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:28.052834    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:28.052844    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:28.064447    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:28.064457    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:30.580853    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:35.582999    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:35.583140    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:35.596056    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:35.596149    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:35.607036    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:35.607112    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:35.617249    9116 logs.go:282] 2 containers: [7c37d96e64ed d9576a9c94aa]
	I1211 15:40:35.617320    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:35.627749    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:35.627830    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:35.637917    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:35.638005    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:35.648298    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:35.648376    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:35.658359    9116 logs.go:282] 0 containers: []
	W1211 15:40:35.658376    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:35.658437    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:35.669053    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:35.669070    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:35.669075    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:35.680418    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:35.680430    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:35.716062    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:35.716070    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:35.720355    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:35.720363    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:35.756684    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:35.756695    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:35.770916    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:35.770930    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:35.785112    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:35.785125    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:35.803062    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:35.803075    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:35.814683    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:35.814694    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:35.830408    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:35.830422    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:35.845886    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:35.845896    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:35.857659    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:35.857669    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:35.882646    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:35.882655    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:38.396435    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:43.398736    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:43.398944    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:43.416154    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:43.416261    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:43.428125    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:43.428201    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:43.441406    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:40:43.441494    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:43.451658    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:43.451740    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:43.462543    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:43.462643    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:43.478373    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:43.478450    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:43.488774    9116 logs.go:282] 0 containers: []
	W1211 15:40:43.488787    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:43.488854    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:43.499142    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:43.499160    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:43.499166    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:43.532853    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:43.532867    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:43.537918    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:43.537929    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:43.562523    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:43.562533    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:43.576626    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:43.576641    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:43.591489    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:43.591500    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:43.603674    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:43.603689    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:43.614882    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:40:43.614893    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:40:43.625918    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:40:43.625928    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:40:43.637621    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:43.637631    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:43.655098    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:43.655112    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:43.670148    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:43.670160    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:43.682706    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:43.682718    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:43.717985    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:43.717996    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:43.732154    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:43.732167    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
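
From 15:40:43 onward the coredns census returns four IDs (03970ed80ec9 and ba1304422de7 joining 7c37d96e64ed and d9576a9c94aa) where earlier cycles returned two. Because the census uses `docker ps -a`, exited containers stay in the list, so a growing count is consistent with the coredns containers having been recreated rather than with extra replicas running. A variant restricted to running containers would disambiguate (illustrative, assuming the same local-exec stand-in as above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", // no -a: running containers only
            "--filter", "name=k8s_coredns",
            "--filter", "status=running",
            "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%d running coredns containers: %v\n", len(ids), ids)
    }
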
	I1211 15:40:46.247391    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:51.249999    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:51.250266    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:51.269246    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:51.269346    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:51.283311    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:51.283399    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:51.299223    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:40:51.299304    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:51.310216    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:51.310296    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:51.321221    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:51.321303    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:51.332482    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:51.332559    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:51.342495    9116 logs.go:282] 0 containers: []
	W1211 15:40:51.342511    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:51.342580    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:51.353361    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:51.353381    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:51.353387    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:51.391545    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:40:51.391559    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:40:51.403001    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:51.403017    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:51.415759    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:51.415770    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:51.433273    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:51.433283    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:51.458589    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:51.458598    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:51.494722    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:51.494735    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:51.499543    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:40:51.499552    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:40:51.511560    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:51.511571    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:51.526036    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:51.526048    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:51.556608    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:51.556618    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:51.568286    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:51.568295    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:51.582261    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:51.582273    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:51.596144    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:51.596155    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:51.607962    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:51.607973    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:40:54.121282    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:40:59.123370    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:40:59.123495    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:40:59.135335    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:40:59.135426    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:40:59.150388    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:40:59.150471    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:40:59.161102    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:40:59.161188    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:40:59.171775    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:40:59.171852    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:40:59.182620    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:40:59.182697    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:40:59.192917    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:40:59.193006    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:40:59.203219    9116 logs.go:282] 0 containers: []
	W1211 15:40:59.203229    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:40:59.203296    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:40:59.213384    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:40:59.213402    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:40:59.213408    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:40:59.218212    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:40:59.218219    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:40:59.239541    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:40:59.239551    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:40:59.251002    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:40:59.251011    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:40:59.285419    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:40:59.285428    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:40:59.299618    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:40:59.299628    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:40:59.311504    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:40:59.311517    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:40:59.325501    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:40:59.325516    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:40:59.337181    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:40:59.337191    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:40:59.360379    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:40:59.360397    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:40:59.372864    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:40:59.372875    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:40:59.407932    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:40:59.407945    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:40:59.422115    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:40:59.422129    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:40:59.439931    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:40:59.439942    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:40:59.465135    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:40:59.465143    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:01.979724    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:06.981877    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:06.982005    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:06.992873    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:06.992961    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:07.006406    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:07.006486    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:07.017080    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:07.017155    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:07.027871    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:07.027940    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:07.038228    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:07.038313    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:07.048911    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:07.048984    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:07.059814    9116 logs.go:282] 0 containers: []
	W1211 15:41:07.059832    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:07.059902    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:07.070540    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:07.070557    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:07.070564    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:07.105725    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:07.105735    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:07.119531    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:07.119544    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:07.134416    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:07.134429    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:07.145787    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:07.145798    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:07.160113    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:07.160126    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:07.171894    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:07.171907    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:07.196109    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:07.196122    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:07.207660    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:07.207672    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:07.212523    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:07.212530    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:07.247536    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:07.247547    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:07.258718    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:07.258731    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:07.270337    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:07.270347    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:07.286036    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:07.286052    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:07.298317    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:07.298328    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:09.822675    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:14.824829    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:14.824954    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:14.836278    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:14.836367    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:14.847028    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:14.847104    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:14.857843    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:14.857920    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:14.868311    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:14.868381    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:14.878969    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:14.879038    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:14.889303    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:14.889395    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:14.910486    9116 logs.go:282] 0 containers: []
	W1211 15:41:14.910497    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:14.910559    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:14.921314    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:14.921333    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:14.921342    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:14.949359    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:14.949384    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:14.973656    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:14.973668    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:14.985149    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:14.985163    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:15.002400    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:15.002411    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:15.014112    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:15.014138    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:15.025552    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:15.025564    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:15.037927    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:15.037938    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:15.042140    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:15.042148    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:15.079310    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:15.079321    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:15.093255    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:15.093266    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:15.105071    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:15.105084    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:15.140372    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:15.140379    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:15.151872    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:15.151883    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:15.170357    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:15.170369    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:17.684041    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:22.686163    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:22.686520    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:22.712189    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:22.712329    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:22.730297    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:22.730391    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:22.743743    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:22.743840    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:22.754879    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:22.754956    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:22.765149    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:22.765229    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:22.776766    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:22.776842    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:22.788244    9116 logs.go:282] 0 containers: []
	W1211 15:41:22.788256    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:22.788326    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:22.799515    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:22.799534    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:22.799539    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:22.813995    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:22.814005    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:22.825966    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:22.825979    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:22.837667    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:22.837678    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:22.849593    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:22.849607    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:22.864235    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:22.864246    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:22.882178    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:22.882189    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:22.886366    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:22.886375    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:22.909904    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:22.909916    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:22.921279    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:22.921293    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:22.955607    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:22.955616    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:23.021165    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:23.021176    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:23.035210    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:23.035223    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:23.046633    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:23.046644    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:23.059455    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:23.059464    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:25.573518    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:30.575470    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:30.575726    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:30.594711    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:30.594822    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:30.608891    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:30.608981    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:30.620992    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:30.621081    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:30.634933    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:30.635025    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:30.645659    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:30.645744    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:30.656506    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:30.656583    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:30.666657    9116 logs.go:282] 0 containers: []
	W1211 15:41:30.666678    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:30.666745    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:30.676829    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:30.676847    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:30.676854    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:30.688318    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:30.688329    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:30.700603    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:30.700616    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:30.712115    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:30.712126    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:30.729259    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:30.729270    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:30.740940    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:30.740951    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:30.764772    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:30.764779    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:30.779179    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:30.779190    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:30.816311    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:30.816322    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:30.831278    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:30.831289    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:30.867207    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:30.867215    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:30.879051    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:30.879062    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:30.883883    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:30.883892    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:30.899958    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:30.899971    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:30.917552    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:30.917562    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:33.434363    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:38.434646    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:38.434840    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:38.449383    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:38.449474    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:38.460701    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:38.460781    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:38.471540    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:38.471615    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:38.482917    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:38.482996    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:38.493289    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:38.493369    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:38.503717    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:38.503789    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:38.513689    9116 logs.go:282] 0 containers: []
	W1211 15:41:38.513699    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:38.513765    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:38.524134    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:38.524156    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:38.524163    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:38.559047    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:38.559058    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:38.573162    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:38.573174    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:38.586934    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:38.586945    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:38.598716    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:38.598731    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:38.610253    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:38.610263    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:38.621799    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:38.621810    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:38.657402    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:38.657412    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:38.662042    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:38.662049    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:38.673861    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:38.673872    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:38.687380    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:38.687395    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:38.705616    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:38.705626    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:38.720735    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:38.720747    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:38.746811    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:38.746830    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:38.759545    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:38.759558    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:41.280823    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:46.281526    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:46.281834    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:46.306163    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:46.306300    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:46.325107    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:46.325201    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:46.339154    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:46.339257    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:46.350508    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:46.350587    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:46.360993    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:46.361068    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:46.371105    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:46.371183    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:46.381327    9116 logs.go:282] 0 containers: []
	W1211 15:41:46.381338    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:46.381400    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:46.392204    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:46.392218    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:46.392224    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:46.406217    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:46.406231    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:46.442526    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:46.442537    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:46.447428    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:46.447433    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:46.461487    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:46.461497    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:46.472723    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:46.472751    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:46.484822    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:46.484833    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:46.499536    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:46.499551    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:46.534389    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:46.534404    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:46.552595    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:46.552606    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:46.564312    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:46.564322    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:46.576410    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:46.576421    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:46.589459    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:46.589471    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:46.602200    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:46.602210    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:46.618857    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:46.618868    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:49.147200    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:41:54.149255    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:41:54.149441    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:41:54.164721    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:41:54.164803    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:41:54.175344    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:41:54.175421    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:41:54.185815    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:41:54.185901    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:41:54.196734    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:41:54.196814    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:41:54.206884    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:41:54.206970    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:41:54.218052    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:41:54.218133    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:41:54.232781    9116 logs.go:282] 0 containers: []
	W1211 15:41:54.232796    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:41:54.232866    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:41:54.242992    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:41:54.243011    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:41:54.243017    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:41:54.247563    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:41:54.247572    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:41:54.282927    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:41:54.282937    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:41:54.295272    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:41:54.295284    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:41:54.310835    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:41:54.310846    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:41:54.323070    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:41:54.323080    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:41:54.334569    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:41:54.334580    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:41:54.349448    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:41:54.349462    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:41:54.361026    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:41:54.361040    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:41:54.372855    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:41:54.372865    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:41:54.384174    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:41:54.384185    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:41:54.418743    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:41:54.418760    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:41:54.433432    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:41:54.433447    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:41:54.445257    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:41:54.445269    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:41:54.464085    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:41:54.464093    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:41:56.991127    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:01.993269    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:01.993395    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:02.004534    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:42:02.004619    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:02.015128    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:42:02.015208    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:02.025674    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:42:02.025748    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:02.037292    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:42:02.037370    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:02.049712    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:42:02.049789    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:02.059868    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:42:02.059940    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:02.070273    9116 logs.go:282] 0 containers: []
	W1211 15:42:02.070283    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:02.070344    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:02.080747    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:42:02.080767    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:02.080774    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:02.085333    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:42:02.085340    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:42:02.099677    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:42:02.099689    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:42:02.117074    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:42:02.117084    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:42:02.128857    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:42:02.128867    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:42:02.144111    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:42:02.144125    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:42:02.171008    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:02.171019    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:02.206847    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:42:02.206856    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:42:02.218599    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:42:02.218613    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:42:02.231094    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:42:02.231104    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:42:02.242511    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:02.242524    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:02.266871    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:02.266879    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:02.303877    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:42:02.303889    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:42:02.316187    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:42:02.316199    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:42:02.328523    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:42:02.328535    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:04.844868    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:09.845714    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:09.845841    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:09.856862    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:42:09.856952    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:09.867935    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:42:09.868024    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:09.878686    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:42:09.878769    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:09.889175    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:42:09.889249    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:09.899737    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:42:09.899817    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:09.910249    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:42:09.910322    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:09.923679    9116 logs.go:282] 0 containers: []
	W1211 15:42:09.923691    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:09.923754    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:09.934004    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:42:09.934022    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:09.934027    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:09.938925    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:09.938935    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:09.972746    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:42:09.972759    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:42:09.988010    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:42:09.988022    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:42:10.006924    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:42:10.006936    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:10.018514    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:10.018526    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:10.055036    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:42:10.055046    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:42:10.069617    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:10.069629    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:10.093245    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:42:10.093252    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:42:10.104319    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:42:10.104333    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:42:10.116323    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:42:10.116335    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:42:10.128245    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:42:10.128256    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:42:10.146456    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:42:10.146467    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:42:10.158589    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:42:10.158602    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:42:10.174268    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:42:10.174281    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:42:12.688690    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:17.690866    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:17.690988    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1211 15:42:17.706169    9116 logs.go:282] 1 containers: [29474452e020]
	I1211 15:42:17.706257    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1211 15:42:17.719667    9116 logs.go:282] 1 containers: [920d8038872e]
	I1211 15:42:17.719747    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1211 15:42:17.730523    9116 logs.go:282] 4 containers: [03970ed80ec9 ba1304422de7 7c37d96e64ed d9576a9c94aa]
	I1211 15:42:17.730595    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1211 15:42:17.741112    9116 logs.go:282] 1 containers: [639f1a49e805]
	I1211 15:42:17.741187    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1211 15:42:17.751757    9116 logs.go:282] 1 containers: [c4fecad779d7]
	I1211 15:42:17.751838    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1211 15:42:17.763530    9116 logs.go:282] 1 containers: [c93775f4e6bd]
	I1211 15:42:17.763603    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1211 15:42:17.773167    9116 logs.go:282] 0 containers: []
	W1211 15:42:17.773184    9116 logs.go:284] No container was found matching "kindnet"
	I1211 15:42:17.773245    9116 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1211 15:42:17.783961    9116 logs.go:282] 1 containers: [4491a3ee56ae]
	I1211 15:42:17.783984    9116 logs.go:123] Gathering logs for coredns [7c37d96e64ed] ...
	I1211 15:42:17.783990    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c37d96e64ed"
	I1211 15:42:17.795768    9116 logs.go:123] Gathering logs for kube-scheduler [639f1a49e805] ...
	I1211 15:42:17.795778    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 639f1a49e805"
	I1211 15:42:17.810716    9116 logs.go:123] Gathering logs for kube-controller-manager [c93775f4e6bd] ...
	I1211 15:42:17.810727    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c93775f4e6bd"
	I1211 15:42:17.828067    9116 logs.go:123] Gathering logs for kubelet ...
	I1211 15:42:17.828080    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 15:42:17.863680    9116 logs.go:123] Gathering logs for describe nodes ...
	I1211 15:42:17.863701    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1211 15:42:17.897611    9116 logs.go:123] Gathering logs for etcd [920d8038872e] ...
	I1211 15:42:17.897625    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 920d8038872e"
	I1211 15:42:17.913088    9116 logs.go:123] Gathering logs for coredns [03970ed80ec9] ...
	I1211 15:42:17.913098    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03970ed80ec9"
	I1211 15:42:17.924464    9116 logs.go:123] Gathering logs for coredns [ba1304422de7] ...
	I1211 15:42:17.924477    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1304422de7"
	I1211 15:42:17.935772    9116 logs.go:123] Gathering logs for coredns [d9576a9c94aa] ...
	I1211 15:42:17.935786    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9576a9c94aa"
	I1211 15:42:17.951735    9116 logs.go:123] Gathering logs for storage-provisioner [4491a3ee56ae] ...
	I1211 15:42:17.951747    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4491a3ee56ae"
	I1211 15:42:17.963445    9116 logs.go:123] Gathering logs for Docker ...
	I1211 15:42:17.963457    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1211 15:42:17.986654    9116 logs.go:123] Gathering logs for container status ...
	I1211 15:42:17.986663    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 15:42:17.998326    9116 logs.go:123] Gathering logs for dmesg ...
	I1211 15:42:17.998338    9116 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 15:42:18.003500    9116 logs.go:123] Gathering logs for kube-proxy [c4fecad779d7] ...
	I1211 15:42:18.003509    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4fecad779d7"
	I1211 15:42:18.016538    9116 logs.go:123] Gathering logs for kube-apiserver [29474452e020] ...
	I1211 15:42:18.016553    9116 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29474452e020"
	I1211 15:42:20.534616    9116 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1211 15:42:25.536715    9116 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1211 15:42:25.541153    9116 out.go:201] 
	W1211 15:42:25.544897    9116 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1211 15:42:25.544903    9116 out.go:270] * 
	W1211 15:42:25.545668    9116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:42:25.557010    9116 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-684000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (583.73s)
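
The failure above is minikube's API-server wait loop timing out: every few seconds it probes https://10.0.2.15:8443/healthz, each probe dies on a ~5s client timeout, and after each failure it re-enumerates the containers and re-gathers component logs until the 6m0s node-wait budget is spent. A minimal Go sketch of that polling pattern follows; the URL, the per-request timeout, and the 6m0s budget are taken from the log lines, while the function name, back-off interval, and TLS handling are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the budget lapses,
// mirroring the "Checking apiserver healthz ..." / "stopped: ..." cycle above.
func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// Assumption: skip verification, since the bootstrap apiserver cert is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // hypothetical back-off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("wait for healthy API server:", err)
	}
}

In this run the loop never sees a single 200, which is why the log alternates between the healthz probe and the same container/log-gathering pass for the full six minutes before GUEST_START gives up.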

TestPause/serial/Start (9.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-860000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-860000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.921843958s)

-- stdout --
	* [pause-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-860000" primary control-plane node in "pause-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-860000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-860000 -n pause-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-860000 -n pause-860000: exit status 7 (66.369792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.99s)
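
This failure, and the NoKubernetes failures that follow, share one root cause visible in the stdout above: QEMU cannot reach the socket_vmnet control socket at /var/run/socket_vmnet ("Connection refused"), so every VM create/restart aborts before a guest ever boots. A quick way to verify that precondition on the CI host is a unix-domain dial against the socket, sketched below; the socket path comes from the log, but the helper itself is a hypothetical diagnostic, not part of minikube or the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt to connect to the socket_vmnet control socket that the
	// qemu2 driver needs; failure here reproduces the CI error above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// This is the state the CI host was in: nothing listening on the socket.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, the socket_vmnet daemon (typically run as a privileged service on the host) is not up, and every qemu2-driver test in the remainder of this report will fail the same way within about ten seconds.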

TestNoKubernetes/serial/StartWithK8s (10.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2 : exit status 80 (9.946163875s)

-- stdout --
	* [NoKubernetes-237000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-237000" primary control-plane node in "NoKubernetes-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000: exit status 7 (72.402375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-237000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)
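
Note: when provisioning dies halfway like this, the stderr advice above is the right first step; a sketch of the manual recovery, reusing the exact profile name from the log:

    out/minikube-darwin-arm64 delete -p NoKubernetes-237000    # discard the half-created VM and profile
    out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2    # retry once socket_vmnet is reachable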

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --driver=qemu2 : exit status 80 (5.258123667s)

-- stdout --
	* [NoKubernetes-237000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-237000
	* Restarting existing qemu2 VM for "NoKubernetes-237000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-237000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-237000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000: exit status 7 (40.37775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-237000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
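
Note: the post-mortem pattern repeated throughout this report is worth decoding once: "minikube status" prints the host state ("Stopped") on stdout and also encodes it in its exit code, and the harness tolerates exit status 7 as "host not running". Reproducing the check by hand with the command from the log:

    out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000
    echo $?    # 7 here, which helpers_test.go:239 records as "status error: exit status 7 (may be ok)"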

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.00s)
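
Note: exit status 56 is the code this run pairs with DRV_UNSUPPORTED_OS (see the upgrade-v1.2.0-to-current output below, where the message appears inline with the same exit status): hyperkit ships only for x86_64 macOS, so it cannot start on this darwin/arm64 agent. A hedged reproduction:

    uname -m    # arm64 on this agent; hyperkit has no arm64 build
    out/minikube-darwin-arm64 start --driver=hyperkit; echo $?    # exits 56 with "DRV_UNSUPPORTED_OS" per the log below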

TestNoKubernetes/serial/Start (5.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --driver=qemu2 
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20083
- KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1425334205/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --driver=qemu2 : exit status 80 (5.423300208s)

-- stdout --
	* [NoKubernetes-237000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-237000
	* Restarting existing qemu2 VM for "NoKubernetes-237000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-237000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-237000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000
I1211 15:43:30.090210    7135 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10728d700 0x10728d700 0x10728d700 0x10728d700 0x10728d700 0x10728d700 0x10728d700] Decompressors:map[bz2:0x14000897440 gz:0x14000897448 tar:0x140008973f0 tar.bz2:0x14000897400 tar.gz:0x14000897410 tar.xz:0x14000897420 tar.zst:0x14000897430 tbz2:0x14000897400 tgz:0x14000897410 txz:0x14000897420 tzst:0x14000897430 xz:0x14000897450 zip:0x14000897460 zst:0x14000897458] Getters:map[file:0x140058b5e20 http:0x14000739040 https:0x14000739090] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1211 15:43:30.090276    7135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000: exit status 7 (74.379542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-237000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.36s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20083
- KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2276614942/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.36s)

TestNoKubernetes/serial/StartNoArgs (7.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2 
I1211 15:43:35.302582    7135 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1211 15:43:35.302604    7135 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1211 15:43:35.302654    7135 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1211 15:43:35.302688    7135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit
I1211 15:43:35.704675    7135 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10728d700 0x10728d700 0x10728d700 0x10728d700 0x10728d700 0x10728d700 0x10728d700] Decompressors:map[bz2:0x14000897440 gz:0x14000897448 tar:0x140008973f0 tar.bz2:0x14000897400 tar.gz:0x14000897410 tar.xz:0x14000897420 tar.zst:0x14000897430 tbz2:0x14000897400 tgz:0x14000897410 txz:0x14000897420 tzst:0x14000897430 xz:0x14000897450 zip:0x14000897460 zst:0x14000897458] Getters:map[file:0x1400071fbf0 http:0x140009209b0 https:0x14000920a00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1211 15:43:35.704792    7135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit
I1211 15:43:38.391130    7135 install.go:79] stdout: 
W1211 15:43:38.391291    7135 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit 

I1211 15:43:38.391321    7135 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit]
I1211 15:43:38.407846    7135 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit]
I1211 15:43:38.421163    7135 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit]
I1211 15:43:38.431823    7135 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/002/docker-machine-driver-hyperkit]
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2 : exit status 80 (7.56326875s)

-- stdout --
	* [NoKubernetes-237000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-237000
	* Restarting existing qemu2 VM for "NoKubernetes-237000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-237000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-237000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-237000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-237000 -n NoKubernetes-237000: exit status 7 (65.253166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-237000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (7.63s)
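
Note: the interleaved TestHyperKitDriverInstallOrUpdate lines above (install.go:99/106) record the two privileged steps minikube performs after downloading a driver binary. Replayed by hand, with the long temp paths from the log shortened to the binary name, they amount to:

    sudo chown root:wheel ./docker-machine-driver-hyperkit    # driver binary must be owned by root
    sudo chmod u+s ./docker-machine-driver-hyperkit           # and setuid root, so it can create VMs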

TestNetworkPlugins/group/auto/Start (9.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.83904425s)

-- stdout --
	* [auto-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-736000" primary control-plane node in "auto-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:44:08.972288    9587 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:44:08.972441    9587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:08.972444    9587 out.go:358] Setting ErrFile to fd 2...
	I1211 15:44:08.972446    9587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:08.972571    9587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:44:08.973720    9587 out.go:352] Setting JSON to false
	I1211 15:44:08.991216    9587 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6218,"bootTime":1733954430,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:44:08.991297    9587 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:44:08.997168    9587 out.go:177] * [auto-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:44:09.005381    9587 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:44:09.005433    9587 notify.go:220] Checking for updates...
	I1211 15:44:09.014384    9587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:44:09.015887    9587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:44:09.020344    9587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:44:09.023357    9587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:44:09.024805    9587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:44:09.028715    9587 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:09.028796    9587 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:09.028855    9587 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:44:09.033348    9587 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:44:09.039330    9587 start.go:297] selected driver: qemu2
	I1211 15:44:09.039334    9587 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:44:09.039343    9587 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:44:09.041932    9587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:44:09.045357    9587 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:44:09.048355    9587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:44:09.048372    9587 cni.go:84] Creating CNI manager for ""
	I1211 15:44:09.048393    9587 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:44:09.048397    9587 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:44:09.048425    9587 start.go:340] cluster config:
	{Name:auto-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:44:09.053142    9587 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:44:09.061383    9587 out.go:177] * Starting "auto-736000" primary control-plane node in "auto-736000" cluster
	I1211 15:44:09.065300    9587 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:44:09.065317    9587 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:44:09.065331    9587 cache.go:56] Caching tarball of preloaded images
	I1211 15:44:09.065422    9587 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:44:09.065428    9587 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:44:09.065490    9587 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/auto-736000/config.json ...
	I1211 15:44:09.065501    9587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/auto-736000/config.json: {Name:mk8746b17d8f1e8f824f4d606805d34649559908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:44:09.065983    9587 start.go:360] acquireMachinesLock for auto-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:09.066038    9587 start.go:364] duration metric: took 48.958µs to acquireMachinesLock for "auto-736000"
	I1211 15:44:09.066049    9587 start.go:93] Provisioning new machine with config: &{Name:auto-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:09.066088    9587 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:09.071348    9587 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:09.088963    9587 start.go:159] libmachine.API.Create for "auto-736000" (driver="qemu2")
	I1211 15:44:09.088996    9587 client.go:168] LocalClient.Create starting
	I1211 15:44:09.089071    9587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:09.089110    9587 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:09.089122    9587 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:09.089163    9587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:09.089193    9587 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:09.089205    9587 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:09.089654    9587 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:09.249211    9587 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:09.332224    9587 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:09.332230    9587 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:09.332455    9587 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2
	I1211 15:44:09.342269    9587 main.go:141] libmachine: STDOUT: 
	I1211 15:44:09.342291    9587 main.go:141] libmachine: STDERR: 
	I1211 15:44:09.342366    9587 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2 +20000M
	I1211 15:44:09.350909    9587 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:09.350926    9587 main.go:141] libmachine: STDERR: 
	I1211 15:44:09.350945    9587 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2
	I1211 15:44:09.350950    9587 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:09.350962    9587 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:09.350996    9587 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a1:2b:3e:bd:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2
	I1211 15:44:09.352833    9587 main.go:141] libmachine: STDOUT: 
	I1211 15:44:09.352848    9587 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:09.352866    9587 client.go:171] duration metric: took 263.871875ms to LocalClient.Create
	I1211 15:44:11.354983    9587 start.go:128] duration metric: took 2.288944375s to createHost
	I1211 15:44:11.355049    9587 start.go:83] releasing machines lock for "auto-736000", held for 2.28907025s
	W1211 15:44:11.355163    9587 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:11.366363    9587 out.go:177] * Deleting "auto-736000" in qemu2 ...
	W1211 15:44:11.395219    9587 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:11.395244    9587 start.go:729] Will try again in 5 seconds ...
	I1211 15:44:16.397288    9587 start.go:360] acquireMachinesLock for auto-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:16.397809    9587 start.go:364] duration metric: took 407.834µs to acquireMachinesLock for "auto-736000"
	I1211 15:44:16.397912    9587 start.go:93] Provisioning new machine with config: &{Name:auto-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:16.398337    9587 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:16.403919    9587 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:16.452316    9587 start.go:159] libmachine.API.Create for "auto-736000" (driver="qemu2")
	I1211 15:44:16.452376    9587 client.go:168] LocalClient.Create starting
	I1211 15:44:16.452555    9587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:16.452646    9587 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:16.452668    9587 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:16.452727    9587 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:16.452788    9587 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:16.452801    9587 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:16.453673    9587 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:16.630005    9587 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:16.708411    9587 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:16.708417    9587 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:16.708633    9587 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2
	I1211 15:44:16.718431    9587 main.go:141] libmachine: STDOUT: 
	I1211 15:44:16.718447    9587 main.go:141] libmachine: STDERR: 
	I1211 15:44:16.718528    9587 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2 +20000M
	I1211 15:44:16.727128    9587 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:16.727143    9587 main.go:141] libmachine: STDERR: 
	I1211 15:44:16.727154    9587 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2
	I1211 15:44:16.727159    9587 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:16.727174    9587 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:16.727221    9587 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d9:5e:e6:b3:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/auto-736000/disk.qcow2
	I1211 15:44:16.729062    9587 main.go:141] libmachine: STDOUT: 
	I1211 15:44:16.729076    9587 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:16.729089    9587 client.go:171] duration metric: took 276.717208ms to LocalClient.Create
	I1211 15:44:18.731240    9587 start.go:128] duration metric: took 2.332923291s to createHost
	I1211 15:44:18.731389    9587 start.go:83] releasing machines lock for "auto-736000", held for 2.333549125s
	W1211 15:44:18.731785    9587 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:18.745455    9587 out.go:201] 
	W1211 15:44:18.749699    9587 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:44:18.749722    9587 out.go:270] * 
	* 
	W1211 15:44:18.752578    9587 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:44:18.764500    9587 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)
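
Note: the stderr trace above is the clearest record in this report of how the qemu2 driver provisions a machine: qemu-img seeds and grows the qcow2 disk, then socket_vmnet_client connects to /var/run/socket_vmnet and execs QEMU with the vmnet socket passed as fd 3, which is exactly where "Connection refused" surfaces. A condensed sketch of the same sequence (firmware, QMP, pidfile, and MAC options omitted; paths shortened):

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2    # seed the disk image
    qemu-img resize disk.qcow2 +20000M                            # grow it to the requested 20000 MB
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
      -boot d -cdrom boot2docker.iso \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
      -daemonize disk.qcow2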

TestNetworkPlugins/group/kindnet/Start (9.97s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.962869375s)

-- stdout --
	* [kindnet-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-736000" primary control-plane node in "kindnet-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:44:21.162770    9698 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:44:21.162924    9698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:21.162927    9698 out.go:358] Setting ErrFile to fd 2...
	I1211 15:44:21.162930    9698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:21.163073    9698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:44:21.164226    9698 out.go:352] Setting JSON to false
	I1211 15:44:21.181910    9698 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6231,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:44:21.181981    9698 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:44:21.188533    9698 out.go:177] * [kindnet-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:44:21.195492    9698 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:44:21.195548    9698 notify.go:220] Checking for updates...
	I1211 15:44:21.203446    9698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:44:21.206475    9698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:44:21.210475    9698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:44:21.213466    9698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:44:21.216451    9698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:44:21.219825    9698 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:21.219906    9698 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:21.219971    9698 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:44:21.223414    9698 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:44:21.230429    9698 start.go:297] selected driver: qemu2
	I1211 15:44:21.230436    9698 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:44:21.230442    9698 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:44:21.232981    9698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:44:21.237484    9698 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:44:21.240595    9698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:44:21.240629    9698 cni.go:84] Creating CNI manager for "kindnet"
	I1211 15:44:21.240632    9698 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 15:44:21.240662    9698 start.go:340] cluster config:
	{Name:kindnet-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:44:21.245236    9698 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:44:21.253400    9698 out.go:177] * Starting "kindnet-736000" primary control-plane node in "kindnet-736000" cluster
	I1211 15:44:21.257463    9698 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:44:21.257481    9698 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:44:21.257492    9698 cache.go:56] Caching tarball of preloaded images
	I1211 15:44:21.257585    9698 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:44:21.257592    9698 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:44:21.257659    9698 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kindnet-736000/config.json ...
	I1211 15:44:21.257671    9698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kindnet-736000/config.json: {Name:mkb282297ca4840f5a5b3bb3b73ceddbeea716e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:44:21.258122    9698 start.go:360] acquireMachinesLock for kindnet-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:21.258186    9698 start.go:364] duration metric: took 52.25µs to acquireMachinesLock for "kindnet-736000"
	I1211 15:44:21.258197    9698 start.go:93] Provisioning new machine with config: &{Name:kindnet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:21.258223    9698 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:21.262418    9698 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:21.280395    9698 start.go:159] libmachine.API.Create for "kindnet-736000" (driver="qemu2")
	I1211 15:44:21.280423    9698 client.go:168] LocalClient.Create starting
	I1211 15:44:21.280505    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:21.280551    9698 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:21.280570    9698 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:21.280608    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:21.280642    9698 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:21.280652    9698 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:21.281111    9698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:21.441327    9698 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:21.523083    9698 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:21.523088    9698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:21.523315    9698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2
	I1211 15:44:21.533057    9698 main.go:141] libmachine: STDOUT: 
	I1211 15:44:21.533077    9698 main.go:141] libmachine: STDERR: 
	I1211 15:44:21.533135    9698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2 +20000M
	I1211 15:44:21.541648    9698 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:21.541661    9698 main.go:141] libmachine: STDERR: 
	I1211 15:44:21.541677    9698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2
	I1211 15:44:21.541686    9698 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:21.541700    9698 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:21.541731    9698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ec:dc:4c:79:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2
	I1211 15:44:21.543578    9698 main.go:141] libmachine: STDOUT: 
	I1211 15:44:21.543590    9698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:21.543608    9698 client.go:171] duration metric: took 263.187125ms to LocalClient.Create
	I1211 15:44:23.545743    9698 start.go:128] duration metric: took 2.287568708s to createHost
	I1211 15:44:23.545850    9698 start.go:83] releasing machines lock for "kindnet-736000", held for 2.287687042s
	W1211 15:44:23.545908    9698 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:23.564237    9698 out.go:177] * Deleting "kindnet-736000" in qemu2 ...
	W1211 15:44:23.593149    9698 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:23.593173    9698 start.go:729] Will try again in 5 seconds ...
	I1211 15:44:28.595261    9698 start.go:360] acquireMachinesLock for kindnet-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:28.595752    9698 start.go:364] duration metric: took 405.834µs to acquireMachinesLock for "kindnet-736000"
	I1211 15:44:28.595867    9698 start.go:93] Provisioning new machine with config: &{Name:kindnet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:28.596144    9698 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:28.613986    9698 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:28.663342    9698 start.go:159] libmachine.API.Create for "kindnet-736000" (driver="qemu2")
	I1211 15:44:28.663397    9698 client.go:168] LocalClient.Create starting
	I1211 15:44:28.663538    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:28.663613    9698 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:28.663630    9698 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:28.663716    9698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:28.663775    9698 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:28.663787    9698 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:28.664503    9698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:28.835522    9698 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:29.021640    9698 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:29.021647    9698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:29.021893    9698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2
	I1211 15:44:29.032173    9698 main.go:141] libmachine: STDOUT: 
	I1211 15:44:29.032189    9698 main.go:141] libmachine: STDERR: 
	I1211 15:44:29.032257    9698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2 +20000M
	I1211 15:44:29.040630    9698 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:29.040645    9698 main.go:141] libmachine: STDERR: 
	I1211 15:44:29.040658    9698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2
	I1211 15:44:29.040663    9698 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:29.040672    9698 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:29.040699    9698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:18:ac:10:fc:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kindnet-736000/disk.qcow2
	I1211 15:44:29.042473    9698 main.go:141] libmachine: STDOUT: 
	I1211 15:44:29.042486    9698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:29.042499    9698 client.go:171] duration metric: took 379.106292ms to LocalClient.Create
	I1211 15:44:31.044601    9698 start.go:128] duration metric: took 2.4484715s to createHost
	I1211 15:44:31.044658    9698 start.go:83] releasing machines lock for "kindnet-736000", held for 2.448958333s
	W1211 15:44:31.045000    9698 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:31.060724    9698 out.go:201] 
	W1211 15:44:31.065735    9698 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:44:31.065760    9698 out.go:270] * 
	* 
	W1211 15:44:31.068454    9698 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:44:31.078647    9698 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.97s)
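The root cause for this whole group is visible in the stderr above: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube exits with status 80 before provisioning can begin. A minimal sketch for checking the daemon state on the CI host (the binary and socket paths are taken from the log; the nc -U unix-socket probe is an assumption about BSD netcat and is not part of the test run):

	# Is the socket_vmnet daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	ls -l /var/run/socket_vmnet
	# Probe the socket the way socket_vmnet_client does; a "Connection refused"
	# here reproduces the failure recorded above.
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket accepts connections"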

TestNetworkPlugins/group/calico/Start (9.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.913002542s)

-- stdout --
	* [calico-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-736000" primary control-plane node in "calico-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:44:33.577096    9811 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:44:33.577230    9811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:33.577237    9811 out.go:358] Setting ErrFile to fd 2...
	I1211 15:44:33.577247    9811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:33.577393    9811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:44:33.578488    9811 out.go:352] Setting JSON to false
	I1211 15:44:33.596297    9811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6243,"bootTime":1733954430,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:44:33.596379    9811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:44:33.602975    9811 out.go:177] * [calico-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:44:33.610920    9811 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:44:33.610961    9811 notify.go:220] Checking for updates...
	I1211 15:44:33.617738    9811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:44:33.621913    9811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:44:33.625920    9811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:44:33.628844    9811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:44:33.631899    9811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:44:33.635296    9811 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:33.635368    9811 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:33.635414    9811 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:44:33.638897    9811 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:44:33.645940    9811 start.go:297] selected driver: qemu2
	I1211 15:44:33.645945    9811 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:44:33.645956    9811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:44:33.648596    9811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:44:33.651910    9811 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:44:33.655979    9811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:44:33.656001    9811 cni.go:84] Creating CNI manager for "calico"
	I1211 15:44:33.656010    9811 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1211 15:44:33.656053    9811 start.go:340] cluster config:
	{Name:calico-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:44:33.660754    9811 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:44:33.668921    9811 out.go:177] * Starting "calico-736000" primary control-plane node in "calico-736000" cluster
	I1211 15:44:33.672907    9811 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:44:33.672923    9811 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:44:33.672930    9811 cache.go:56] Caching tarball of preloaded images
	I1211 15:44:33.673024    9811 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:44:33.673030    9811 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:44:33.673088    9811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/calico-736000/config.json ...
	I1211 15:44:33.673102    9811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/calico-736000/config.json: {Name:mkf62b8479b78a53eac1fcecc3ae86e74936d499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:44:33.673468    9811 start.go:360] acquireMachinesLock for calico-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:33.673519    9811 start.go:364] duration metric: took 44.875µs to acquireMachinesLock for "calico-736000"
	I1211 15:44:33.673530    9811 start.go:93] Provisioning new machine with config: &{Name:calico-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:33.673570    9811 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:33.681948    9811 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:33.699816    9811 start.go:159] libmachine.API.Create for "calico-736000" (driver="qemu2")
	I1211 15:44:33.699844    9811 client.go:168] LocalClient.Create starting
	I1211 15:44:33.699921    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:33.699958    9811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:33.699970    9811 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:33.700015    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:33.700048    9811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:33.700059    9811 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:33.700510    9811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:33.859967    9811 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:33.946180    9811 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:33.946186    9811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:33.946398    9811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2
	I1211 15:44:33.956203    9811 main.go:141] libmachine: STDOUT: 
	I1211 15:44:33.956233    9811 main.go:141] libmachine: STDERR: 
	I1211 15:44:33.956294    9811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2 +20000M
	I1211 15:44:33.964733    9811 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:33.964748    9811 main.go:141] libmachine: STDERR: 
	I1211 15:44:33.964765    9811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2
	I1211 15:44:33.964770    9811 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:33.964782    9811 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:33.964811    9811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:ab:49:37:2a:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2
	I1211 15:44:33.966637    9811 main.go:141] libmachine: STDOUT: 
	I1211 15:44:33.966652    9811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:33.966672    9811 client.go:171] duration metric: took 266.830167ms to LocalClient.Create
	I1211 15:44:35.968813    9811 start.go:128] duration metric: took 2.295294125s to createHost
	I1211 15:44:35.968892    9811 start.go:83] releasing machines lock for "calico-736000", held for 2.29543325s
	W1211 15:44:35.968974    9811 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:35.981319    9811 out.go:177] * Deleting "calico-736000" in qemu2 ...
	W1211 15:44:36.013679    9811 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:36.013782    9811 start.go:729] Will try again in 5 seconds ...
	I1211 15:44:41.015848    9811 start.go:360] acquireMachinesLock for calico-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:41.016353    9811 start.go:364] duration metric: took 422.333µs to acquireMachinesLock for "calico-736000"
	I1211 15:44:41.016468    9811 start.go:93] Provisioning new machine with config: &{Name:calico-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:41.016774    9811 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:41.034398    9811 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:41.084285    9811 start.go:159] libmachine.API.Create for "calico-736000" (driver="qemu2")
	I1211 15:44:41.084359    9811 client.go:168] LocalClient.Create starting
	I1211 15:44:41.084505    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:41.084582    9811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:41.084601    9811 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:41.084671    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:41.084731    9811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:41.084751    9811 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:41.085424    9811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:41.256819    9811 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:41.386749    9811 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:41.386756    9811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:41.386980    9811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2
	I1211 15:44:41.396990    9811 main.go:141] libmachine: STDOUT: 
	I1211 15:44:41.397013    9811 main.go:141] libmachine: STDERR: 
	I1211 15:44:41.397074    9811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2 +20000M
	I1211 15:44:41.405537    9811 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:41.405552    9811 main.go:141] libmachine: STDERR: 
	I1211 15:44:41.405568    9811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2
	I1211 15:44:41.405573    9811 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:41.405583    9811 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:41.405609    9811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:d5:b1:c3:c3:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/calico-736000/disk.qcow2
	I1211 15:44:41.407396    9811 main.go:141] libmachine: STDOUT: 
	I1211 15:44:41.407410    9811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:41.407430    9811 client.go:171] duration metric: took 323.064ms to LocalClient.Create
	I1211 15:44:43.409545    9811 start.go:128] duration metric: took 2.392817834s to createHost
	I1211 15:44:43.409603    9811 start.go:83] releasing machines lock for "calico-736000", held for 2.393297125s
	W1211 15:44:43.409937    9811 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:43.425636    9811 out.go:201] 
	W1211 15:44:43.429694    9811 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:44:43.429742    9811 out.go:270] * 
	* 
	W1211 15:44:43.432054    9811 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:44:43.443572    9811 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.92s)
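As with kindnet above, the built-in retry ("Will try again in 5 seconds") cannot succeed here: the refusal comes from the host-side socket_vmnet daemon being down, not from the VM, so the second createHost attempt hits the same closed socket. A sketch of bringing the daemon back up, following socket_vmnet's upstream usage; the --vmnet-gateway value is an illustrative assumption, and only the binary and socket paths appear in this log:

	# socket_vmnet must run as root to create the vmnet interface; the socket
	# path must match SocketVMnetPath in the cluster config logged above.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet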

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.915045625s)

-- stdout --
	* [custom-flannel-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-736000" primary control-plane node in "custom-flannel-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:44:46.070114    9928 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:44:46.070268    9928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:46.070271    9928 out.go:358] Setting ErrFile to fd 2...
	I1211 15:44:46.070274    9928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:46.070395    9928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:44:46.071532    9928 out.go:352] Setting JSON to false
	I1211 15:44:46.089234    9928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6256,"bootTime":1733954430,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:44:46.089347    9928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:44:46.094652    9928 out.go:177] * [custom-flannel-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:44:46.101566    9928 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:44:46.101616    9928 notify.go:220] Checking for updates...
	I1211 15:44:46.108458    9928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:44:46.111456    9928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:44:46.115569    9928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:44:46.118480    9928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:44:46.121561    9928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:44:46.124918    9928 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:46.124995    9928 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:46.125040    9928 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:44:46.129530    9928 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:44:46.136531    9928 start.go:297] selected driver: qemu2
	I1211 15:44:46.136536    9928 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:44:46.136542    9928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:44:46.139043    9928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:44:46.143529    9928 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:44:46.146576    9928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:44:46.146597    9928 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1211 15:44:46.146605    9928 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1211 15:44:46.146633    9928 start.go:340] cluster config:
	{Name:custom-flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:44:46.151273    9928 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:44:46.158492    9928 out.go:177] * Starting "custom-flannel-736000" primary control-plane node in "custom-flannel-736000" cluster
	I1211 15:44:46.162538    9928 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:44:46.162556    9928 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:44:46.162564    9928 cache.go:56] Caching tarball of preloaded images
	I1211 15:44:46.162652    9928 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:44:46.162659    9928 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:44:46.162722    9928 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/custom-flannel-736000/config.json ...
	I1211 15:44:46.162733    9928 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/custom-flannel-736000/config.json: {Name:mk630fdff45aa4ab0c7faaf210fd4354b258dbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:44:46.163191    9928 start.go:360] acquireMachinesLock for custom-flannel-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:46.163244    9928 start.go:364] duration metric: took 45.125µs to acquireMachinesLock for "custom-flannel-736000"
	I1211 15:44:46.163257    9928 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:46.163291    9928 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:46.171507    9928 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:46.189719    9928 start.go:159] libmachine.API.Create for "custom-flannel-736000" (driver="qemu2")
	I1211 15:44:46.189753    9928 client.go:168] LocalClient.Create starting
	I1211 15:44:46.189838    9928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:46.189875    9928 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:46.189889    9928 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:46.189925    9928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:46.189955    9928 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:46.189962    9928 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:46.190365    9928 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:46.352268    9928 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:46.468585    9928 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:46.468591    9928 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:46.468819    9928 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2
	I1211 15:44:46.479085    9928 main.go:141] libmachine: STDOUT: 
	I1211 15:44:46.479113    9928 main.go:141] libmachine: STDERR: 
	I1211 15:44:46.479174    9928 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2 +20000M
	I1211 15:44:46.487554    9928 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:46.487569    9928 main.go:141] libmachine: STDERR: 
	I1211 15:44:46.487588    9928 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2
	I1211 15:44:46.487593    9928 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:46.487606    9928 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:46.487641    9928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:15:fb:7d:e9:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2
	I1211 15:44:46.489426    9928 main.go:141] libmachine: STDOUT: 
	I1211 15:44:46.489439    9928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:46.489463    9928 client.go:171] duration metric: took 299.712666ms to LocalClient.Create
	I1211 15:44:48.491572    9928 start.go:128] duration metric: took 2.328326792s to createHost
	I1211 15:44:48.491640    9928 start.go:83] releasing machines lock for "custom-flannel-736000", held for 2.328456708s
	W1211 15:44:48.491748    9928 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:48.505888    9928 out.go:177] * Deleting "custom-flannel-736000" in qemu2 ...
	W1211 15:44:48.537278    9928 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:48.537306    9928 start.go:729] Will try again in 5 seconds ...
	I1211 15:44:53.539393    9928 start.go:360] acquireMachinesLock for custom-flannel-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:53.540041    9928 start.go:364] duration metric: took 485.375µs to acquireMachinesLock for "custom-flannel-736000"
	I1211 15:44:53.540242    9928 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:53.540476    9928 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:53.557510    9928 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:53.606518    9928 start.go:159] libmachine.API.Create for "custom-flannel-736000" (driver="qemu2")
	I1211 15:44:53.606577    9928 client.go:168] LocalClient.Create starting
	I1211 15:44:53.606729    9928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:53.606817    9928 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:53.606841    9928 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:53.606913    9928 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:53.606978    9928 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:53.606991    9928 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:53.607770    9928 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:53.777147    9928 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:53.877873    9928 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:53.877879    9928 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:53.878092    9928 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2
	I1211 15:44:53.888136    9928 main.go:141] libmachine: STDOUT: 
	I1211 15:44:53.888163    9928 main.go:141] libmachine: STDERR: 
	I1211 15:44:53.888221    9928 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2 +20000M
	I1211 15:44:53.896618    9928 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:53.896634    9928 main.go:141] libmachine: STDERR: 
	I1211 15:44:53.896648    9928 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2
	I1211 15:44:53.896651    9928 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:53.896661    9928 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:53.896687    9928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:7d:fb:f3:b4:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/custom-flannel-736000/disk.qcow2
	I1211 15:44:53.898470    9928 main.go:141] libmachine: STDOUT: 
	I1211 15:44:53.898483    9928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:53.898495    9928 client.go:171] duration metric: took 291.920625ms to LocalClient.Create
	I1211 15:44:55.900616    9928 start.go:128] duration metric: took 2.360171334s to createHost
	I1211 15:44:55.900693    9928 start.go:83] releasing machines lock for "custom-flannel-736000", held for 2.360680166s
	W1211 15:44:55.901099    9928 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:44:55.918918    9928 out.go:201] 
	W1211 15:44:55.923844    9928 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:44:55.923873    9928 out.go:270] * 
	* 
	W1211 15:44:55.926489    9928 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:44:55.937764    9928 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
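
All of the network-plugin failures in this group reduce to the same root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and each create attempt fails with "Connection refused". The failing step can be reproduced outside the test suite with a minimal Go probe like the sketch below (not minikube code; the file name and messages are illustrative), which dials the same path that appears as SocketVMnetPath in the cluster config above.

	// probe_socket_vmnet.go - hypothetical standalone check, not part of minikube.
	// Dials the unix socket that socket_vmnet_client connects to before launching
	// QEMU. If the socket_vmnet daemon is down, or the Jenkins agent lacks
	// permission on the socket, this fails with the same "connection refused"
	// error repeated throughout the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this agent the probe would presumably fail, pointing at the daemon (usually a root launchd service when installed via Homebrew) being down or its socket unreadable, rather than at anything specific to the flannel configuration under test.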

TestNetworkPlugins/group/false/Start (9.91s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.909509042s)

-- stdout --
	* [false-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-736000" primary control-plane node in "false-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:44:58.499656   10050 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:44:58.499813   10050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:58.499817   10050 out.go:358] Setting ErrFile to fd 2...
	I1211 15:44:58.499819   10050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:44:58.499973   10050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:44:58.501104   10050 out.go:352] Setting JSON to false
	I1211 15:44:58.518607   10050 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6268,"bootTime":1733954430,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:44:58.518677   10050 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:44:58.525515   10050 out.go:177] * [false-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:44:58.533371   10050 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:44:58.533402   10050 notify.go:220] Checking for updates...
	I1211 15:44:58.540436   10050 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:44:58.544401   10050 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:44:58.548444   10050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:44:58.551504   10050 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:44:58.554476   10050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:44:58.557848   10050 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:58.557933   10050 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:44:58.557990   10050 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:44:58.561461   10050 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:44:58.568417   10050 start.go:297] selected driver: qemu2
	I1211 15:44:58.568423   10050 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:44:58.568428   10050 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:44:58.571008   10050 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:44:58.575520   10050 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:44:58.579488   10050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:44:58.579510   10050 cni.go:84] Creating CNI manager for "false"
	I1211 15:44:58.579536   10050 start.go:340] cluster config:
	{Name:false-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:44:58.584312   10050 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:44:58.591391   10050 out.go:177] * Starting "false-736000" primary control-plane node in "false-736000" cluster
	I1211 15:44:58.595457   10050 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:44:58.595473   10050 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:44:58.595487   10050 cache.go:56] Caching tarball of preloaded images
	I1211 15:44:58.595574   10050 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:44:58.595581   10050 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:44:58.595634   10050 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/false-736000/config.json ...
	I1211 15:44:58.595646   10050 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/false-736000/config.json: {Name:mk78a52a805140b4a1983c62052a923787b71510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:44:58.596123   10050 start.go:360] acquireMachinesLock for false-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:44:58.596174   10050 start.go:364] duration metric: took 44.459µs to acquireMachinesLock for "false-736000"
	I1211 15:44:58.596186   10050 start.go:93] Provisioning new machine with config: &{Name:false-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:44:58.596212   10050 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:44:58.604466   10050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:44:58.623239   10050 start.go:159] libmachine.API.Create for "false-736000" (driver="qemu2")
	I1211 15:44:58.623265   10050 client.go:168] LocalClient.Create starting
	I1211 15:44:58.623341   10050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:44:58.623379   10050 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:58.623389   10050 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:58.623431   10050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:44:58.623462   10050 main.go:141] libmachine: Decoding PEM data...
	I1211 15:44:58.623471   10050 main.go:141] libmachine: Parsing certificate...
	I1211 15:44:58.623943   10050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:44:58.784454   10050 main.go:141] libmachine: Creating SSH key...
	I1211 15:44:58.855579   10050 main.go:141] libmachine: Creating Disk image...
	I1211 15:44:58.855585   10050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:44:58.855814   10050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2
	I1211 15:44:58.865887   10050 main.go:141] libmachine: STDOUT: 
	I1211 15:44:58.865909   10050 main.go:141] libmachine: STDERR: 
	I1211 15:44:58.865966   10050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2 +20000M
	I1211 15:44:58.874398   10050 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:44:58.874415   10050 main.go:141] libmachine: STDERR: 
	I1211 15:44:58.874437   10050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2
	I1211 15:44:58.874444   10050 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:44:58.874455   10050 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:44:58.874485   10050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:fb:b4:8d:f0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2
	I1211 15:44:58.876325   10050 main.go:141] libmachine: STDOUT: 
	I1211 15:44:58.876343   10050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:44:58.876366   10050 client.go:171] duration metric: took 253.099917ms to LocalClient.Create
	I1211 15:45:00.878449   10050 start.go:128] duration metric: took 2.282287542s to createHost
	I1211 15:45:00.878493   10050 start.go:83] releasing machines lock for "false-736000", held for 2.282382375s
	W1211 15:45:00.878524   10050 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:00.892457   10050 out.go:177] * Deleting "false-736000" in qemu2 ...
	W1211 15:45:00.923027   10050 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:00.923059   10050 start.go:729] Will try again in 5 seconds ...
	I1211 15:45:05.925154   10050 start.go:360] acquireMachinesLock for false-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:05.925777   10050 start.go:364] duration metric: took 435.25µs to acquireMachinesLock for "false-736000"
	I1211 15:45:05.925910   10050 start.go:93] Provisioning new machine with config: &{Name:false-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:05.926161   10050 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:05.937874   10050 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:05.987121   10050 start.go:159] libmachine.API.Create for "false-736000" (driver="qemu2")
	I1211 15:45:05.987185   10050 client.go:168] LocalClient.Create starting
	I1211 15:45:05.987325   10050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:05.987397   10050 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:05.987411   10050 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:05.987479   10050 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:05.987540   10050 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:05.987557   10050 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:05.988337   10050 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:06.159482   10050 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:06.305035   10050 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:06.305042   10050 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:06.305284   10050 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2
	I1211 15:45:06.315819   10050 main.go:141] libmachine: STDOUT: 
	I1211 15:45:06.315851   10050 main.go:141] libmachine: STDERR: 
	I1211 15:45:06.315908   10050 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2 +20000M
	I1211 15:45:06.324386   10050 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:06.324405   10050 main.go:141] libmachine: STDERR: 
	I1211 15:45:06.324419   10050 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2
	I1211 15:45:06.324424   10050 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:06.324433   10050 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:06.324456   10050 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:c1:7b:73:cf:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/false-736000/disk.qcow2
	I1211 15:45:06.326291   10050 main.go:141] libmachine: STDOUT: 
	I1211 15:45:06.326314   10050 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:06.326327   10050 client.go:171] duration metric: took 339.147667ms to LocalClient.Create
	I1211 15:45:08.328432   10050 start.go:128] duration metric: took 2.402317291s to createHost
	I1211 15:45:08.328478   10050 start.go:83] releasing machines lock for "false-736000", held for 2.402750625s
	W1211 15:45:08.328739   10050 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:08.345516   10050 out.go:201] 
	W1211 15:45:08.348548   10050 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:45:08.348572   10050 out.go:270] * 
	* 
	W1211 15:45:08.351302   10050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:45:08.362414   10050 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.91s)
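
The false-736000 profile fails identically: both provisioning attempts hit the refused socket. The control flow visible in the log (StartHost fails, the profile is deleted, "Will try again in 5 seconds ...", one more attempt, then exit with GUEST_PROVISION) is a fixed-delay, two-attempt retry. The Go sketch below illustrates that shape only and is not minikube's implementation; createHost is a stand-in for the real provisioning call.

	// retry_shape.go - illustrative sketch of the retry pattern in the log;
	// names and messages are placeholders, not minikube's code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stand-in: the real step shells out to socket_vmnet_client and QEMU.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const attempts = 2
		for i := 1; i <= attempts; i++ {
			if err := createHost(); err != nil {
				if i < attempts {
					fmt.Println("StartHost failed, but will try again:", err)
					time.Sleep(5 * time.Second)
					continue
				}
				fmt.Println("Exiting due to GUEST_PROVISION:", err)
				return
			}
			fmt.Println("host started")
			return
		}
	}

Because the daemon never recovers between attempts, the second try fails the same way, which is why each test in this group finishes in roughly 10 seconds: two create attempts of about 2.3s each plus the fixed 5s pause.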

TestNetworkPlugins/group/enable-default-cni/Start (9.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.989195375s)

-- stdout --
	* [enable-default-cni-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-736000" primary control-plane node in "enable-default-cni-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:45:10.676456   10161 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:45:10.676625   10161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:10.676628   10161 out.go:358] Setting ErrFile to fd 2...
	I1211 15:45:10.676630   10161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:10.676773   10161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:45:10.677943   10161 out.go:352] Setting JSON to false
	I1211 15:45:10.695601   10161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6280,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:45:10.695673   10161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:45:10.701195   10161 out.go:177] * [enable-default-cni-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:45:10.709271   10161 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:45:10.709325   10161 notify.go:220] Checking for updates...
	I1211 15:45:10.716216   10161 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:45:10.719235   10161 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:45:10.723197   10161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:45:10.726245   10161 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:45:10.729268   10161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:45:10.732602   10161 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:10.732682   10161 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:10.732732   10161 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:45:10.737245   10161 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:45:10.744261   10161 start.go:297] selected driver: qemu2
	I1211 15:45:10.744267   10161 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:45:10.744275   10161 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:45:10.746838   10161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:45:10.751265   10161 out.go:177] * Automatically selected the socket_vmnet network
	E1211 15:45:10.754277   10161 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1211 15:45:10.754289   10161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:45:10.754305   10161 cni.go:84] Creating CNI manager for "bridge"
	I1211 15:45:10.754309   10161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:45:10.754347   10161 start.go:340] cluster config:
	{Name:enable-default-cni-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:45:10.759077   10161 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:45:10.767295   10161 out.go:177] * Starting "enable-default-cni-736000" primary control-plane node in "enable-default-cni-736000" cluster
	I1211 15:45:10.771210   10161 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:45:10.771227   10161 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:45:10.771234   10161 cache.go:56] Caching tarball of preloaded images
	I1211 15:45:10.771314   10161 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:45:10.771320   10161 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:45:10.771398   10161 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/enable-default-cni-736000/config.json ...
	I1211 15:45:10.771410   10161 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/enable-default-cni-736000/config.json: {Name:mkbf242c26d70d0fe63516525f2b72cce722e79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:45:10.771880   10161 start.go:360] acquireMachinesLock for enable-default-cni-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:10.771932   10161 start.go:364] duration metric: took 44.75µs to acquireMachinesLock for "enable-default-cni-736000"
	I1211 15:45:10.771942   10161 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:10.771972   10161 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:10.780147   10161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:10.798504   10161 start.go:159] libmachine.API.Create for "enable-default-cni-736000" (driver="qemu2")
	I1211 15:45:10.798535   10161 client.go:168] LocalClient.Create starting
	I1211 15:45:10.798611   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:10.798650   10161 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:10.798663   10161 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:10.798706   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:10.798737   10161 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:10.798745   10161 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:10.799127   10161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:10.959271   10161 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:11.077609   10161 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:11.077615   10161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:11.077826   10161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I1211 15:45:11.087638   10161 main.go:141] libmachine: STDOUT: 
	I1211 15:45:11.087660   10161 main.go:141] libmachine: STDERR: 
	I1211 15:45:11.087718   10161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2 +20000M
	I1211 15:45:11.096357   10161 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:11.096382   10161 main.go:141] libmachine: STDERR: 
	I1211 15:45:11.096398   10161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I1211 15:45:11.096403   10161 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:11.096416   10161 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:11.096446   10161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c4:f1:94:74:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I1211 15:45:11.098391   10161 main.go:141] libmachine: STDOUT: 
	I1211 15:45:11.098412   10161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:11.098434   10161 client.go:171] duration metric: took 299.9005ms to LocalClient.Create
	I1211 15:45:13.100556   10161 start.go:128] duration metric: took 2.328633916s to createHost
	I1211 15:45:13.100617   10161 start.go:83] releasing machines lock for "enable-default-cni-736000", held for 2.328747792s
	W1211 15:45:13.100662   10161 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:13.117906   10161 out.go:177] * Deleting "enable-default-cni-736000" in qemu2 ...
	W1211 15:45:13.146964   10161 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:13.146994   10161 start.go:729] Will try again in 5 seconds ...
	I1211 15:45:18.149017   10161 start.go:360] acquireMachinesLock for enable-default-cni-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:18.149603   10161 start.go:364] duration metric: took 496.875µs to acquireMachinesLock for "enable-default-cni-736000"
	I1211 15:45:18.149718   10161 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:18.149993   10161 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:18.166822   10161 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:18.215931   10161 start.go:159] libmachine.API.Create for "enable-default-cni-736000" (driver="qemu2")
	I1211 15:45:18.215973   10161 client.go:168] LocalClient.Create starting
	I1211 15:45:18.216100   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:18.216188   10161 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:18.216201   10161 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:18.216263   10161 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:18.216331   10161 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:18.216341   10161 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:18.217041   10161 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:18.389487   10161 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:18.555304   10161 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:18.555311   10161 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:18.555536   10161 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I1211 15:45:18.565994   10161 main.go:141] libmachine: STDOUT: 
	I1211 15:45:18.566021   10161 main.go:141] libmachine: STDERR: 
	I1211 15:45:18.566102   10161 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2 +20000M
	I1211 15:45:18.575839   10161 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:18.575862   10161 main.go:141] libmachine: STDERR: 
	I1211 15:45:18.575881   10161 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I1211 15:45:18.575886   10161 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:18.575894   10161 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:18.575934   10161 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:52:ad:38:c1:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/enable-default-cni-736000/disk.qcow2
	I1211 15:45:18.578169   10161 main.go:141] libmachine: STDOUT: 
	I1211 15:45:18.578184   10161 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:18.578198   10161 client.go:171] duration metric: took 362.229417ms to LocalClient.Create
	I1211 15:45:20.580438   10161 start.go:128] duration metric: took 2.430475s to createHost
	I1211 15:45:20.580511   10161 start.go:83] releasing machines lock for "enable-default-cni-736000", held for 2.430959125s
	W1211 15:45:20.580994   10161 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:20.598803   10161 out.go:201] 
	W1211 15:45:20.602796   10161 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:45:20.602833   10161 out.go:270] * 
	W1211 15:45:20.605346   10161 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:45:20.617525   10161 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.99s)
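
These TestNetworkPlugins failures share one root cause: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so libmachine's LocalClient.Create aborts before the VM ever boots. A quick host-side triage, as a sketch assuming the install paths shown in the log:

	# Is the daemon running, and does it still hold the Unix socket?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# "Connection refused" with the socket file present usually means the
	# daemon exited and left a stale socket; no file means it never started.

If both checks come up empty, every qemu2/socket_vmnet start in this report will fail the same way with exit status 80.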

TestNetworkPlugins/group/flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.862141583s)

-- stdout --
	* [flannel-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-736000" primary control-plane node in "flannel-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:45:22.929298   10270 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:45:22.929453   10270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:22.929457   10270 out.go:358] Setting ErrFile to fd 2...
	I1211 15:45:22.929459   10270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:22.929617   10270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:45:22.930768   10270 out.go:352] Setting JSON to false
	I1211 15:45:22.948330   10270 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6292,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:45:22.948405   10270 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:45:22.955146   10270 out.go:177] * [flannel-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:45:22.963278   10270 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:45:22.963335   10270 notify.go:220] Checking for updates...
	I1211 15:45:22.971018   10270 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:45:22.975145   10270 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:45:22.979149   10270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:45:22.982156   10270 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:45:22.985157   10270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:45:22.988434   10270 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:22.988524   10270 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:22.988578   10270 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:45:22.992137   10270 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:45:22.999151   10270 start.go:297] selected driver: qemu2
	I1211 15:45:22.999157   10270 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:45:22.999166   10270 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:45:23.001839   10270 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:45:23.003267   10270 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:45:23.006294   10270 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:45:23.006319   10270 cni.go:84] Creating CNI manager for "flannel"
	I1211 15:45:23.006323   10270 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1211 15:45:23.006364   10270 start.go:340] cluster config:
	{Name:flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:45:23.011372   10270 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:45:23.019170   10270 out.go:177] * Starting "flannel-736000" primary control-plane node in "flannel-736000" cluster
	I1211 15:45:23.023156   10270 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:45:23.023181   10270 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:45:23.023189   10270 cache.go:56] Caching tarball of preloaded images
	I1211 15:45:23.023271   10270 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:45:23.023277   10270 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:45:23.023341   10270 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/flannel-736000/config.json ...
	I1211 15:45:23.023352   10270 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/flannel-736000/config.json: {Name:mk8d612fae998b9caebd5a49895704088b3fcf27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:45:23.023794   10270 start.go:360] acquireMachinesLock for flannel-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:23.023842   10270 start.go:364] duration metric: took 42.167µs to acquireMachinesLock for "flannel-736000"
	I1211 15:45:23.023852   10270 start.go:93] Provisioning new machine with config: &{Name:flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:23.023888   10270 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:23.032160   10270 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:23.049387   10270 start.go:159] libmachine.API.Create for "flannel-736000" (driver="qemu2")
	I1211 15:45:23.049411   10270 client.go:168] LocalClient.Create starting
	I1211 15:45:23.049485   10270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:23.049520   10270 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:23.049531   10270 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:23.049564   10270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:23.049596   10270 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:23.049605   10270 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:23.050058   10270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:23.210573   10270 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:23.267014   10270 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:23.267020   10270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:23.267248   10270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2
	I1211 15:45:23.277430   10270 main.go:141] libmachine: STDOUT: 
	I1211 15:45:23.277452   10270 main.go:141] libmachine: STDERR: 
	I1211 15:45:23.277503   10270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2 +20000M
	I1211 15:45:23.286030   10270 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:23.286050   10270 main.go:141] libmachine: STDERR: 
	I1211 15:45:23.286072   10270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2
	I1211 15:45:23.286076   10270 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:23.286088   10270 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:23.286114   10270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:0a:ab:58:c7:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2
	I1211 15:45:23.287956   10270 main.go:141] libmachine: STDOUT: 
	I1211 15:45:23.287970   10270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:23.287992   10270 client.go:171] duration metric: took 238.580625ms to LocalClient.Create
	I1211 15:45:25.290121   10270 start.go:128] duration metric: took 2.266281417s to createHost
	I1211 15:45:25.290176   10270 start.go:83] releasing machines lock for "flannel-736000", held for 2.26639425s
	W1211 15:45:25.290227   10270 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:25.305358   10270 out.go:177] * Deleting "flannel-736000" in qemu2 ...
	W1211 15:45:25.333865   10270 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:25.333883   10270 start.go:729] Will try again in 5 seconds ...
	I1211 15:45:30.334508   10270 start.go:360] acquireMachinesLock for flannel-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:30.335009   10270 start.go:364] duration metric: took 419.417µs to acquireMachinesLock for "flannel-736000"
	I1211 15:45:30.335130   10270 start.go:93] Provisioning new machine with config: &{Name:flannel-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:30.335453   10270 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:30.353179   10270 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:30.401296   10270 start.go:159] libmachine.API.Create for "flannel-736000" (driver="qemu2")
	I1211 15:45:30.401361   10270 client.go:168] LocalClient.Create starting
	I1211 15:45:30.401504   10270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:30.401584   10270 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:30.401600   10270 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:30.401704   10270 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:30.401761   10270 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:30.401775   10270 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:30.402502   10270 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:30.573885   10270 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:30.689708   10270 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:30.689714   10270 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:30.689933   10270 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2
	I1211 15:45:30.699972   10270 main.go:141] libmachine: STDOUT: 
	I1211 15:45:30.699989   10270 main.go:141] libmachine: STDERR: 
	I1211 15:45:30.700050   10270 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2 +20000M
	I1211 15:45:30.708509   10270 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:30.708532   10270 main.go:141] libmachine: STDERR: 
	I1211 15:45:30.708549   10270 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2
	I1211 15:45:30.708553   10270 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:30.708561   10270 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:30.708586   10270 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c8:21:bf:74:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/flannel-736000/disk.qcow2
	I1211 15:45:30.710450   10270 main.go:141] libmachine: STDOUT: 
	I1211 15:45:30.710463   10270 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:30.710475   10270 client.go:171] duration metric: took 309.117958ms to LocalClient.Create
	I1211 15:45:32.712575   10270 start.go:128] duration metric: took 2.377147166s to createHost
	I1211 15:45:32.712678   10270 start.go:83] releasing machines lock for "flannel-736000", held for 2.377689375s
	W1211 15:45:32.713154   10270 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:32.727742   10270 out.go:201] 
	W1211 15:45:32.731897   10270 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:45:32.731936   10270 out.go:270] * 
	W1211 15:45:32.734381   10270 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:45:32.744765   10270 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
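
The socket_vmnet_client wrapper seen in each attempt connects to the daemon socket and then executes the wrapped command with the connection passed as file descriptor 3 (hence -netdev socket,id=net0,fd=3 on the qemu command line). The failure can therefore be reproduced in isolation, without minikube; a sketch, assuming the client attempts the connection before exec'ing, as the output above suggests (/usr/bin/true stands in for the real qemu invocation):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# Expected while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused

Note that minikube already retries once per test ("Will try again in 5 seconds"), deleting and recreating the profile in between, so with a dead daemon each test spends roughly 10s on two identical failures before exiting with status 80.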

TestNetworkPlugins/group/bridge/Start (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.954138666s)

-- stdout --
	* [bridge-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-736000" primary control-plane node in "bridge-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:45:35.250980   10387 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:45:35.251125   10387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:35.251128   10387 out.go:358] Setting ErrFile to fd 2...
	I1211 15:45:35.251130   10387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:35.251271   10387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:45:35.252453   10387 out.go:352] Setting JSON to false
	I1211 15:45:35.271047   10387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6305,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:45:35.271113   10387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:45:35.277650   10387 out.go:177] * [bridge-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:45:35.285677   10387 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:45:35.285749   10387 notify.go:220] Checking for updates...
	I1211 15:45:35.293617   10387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:45:35.296590   10387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:45:35.299643   10387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:45:35.302555   10387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:45:35.305614   10387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:45:35.308977   10387 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:35.309068   10387 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:35.309109   10387 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:45:35.312546   10387 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:45:35.319665   10387 start.go:297] selected driver: qemu2
	I1211 15:45:35.319672   10387 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:45:35.319679   10387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:45:35.322304   10387 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:45:35.326556   10387 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:45:35.329715   10387 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:45:35.329746   10387 cni.go:84] Creating CNI manager for "bridge"
	I1211 15:45:35.329758   10387 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:45:35.329792   10387 start.go:340] cluster config:
	{Name:bridge-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:45:35.334475   10387 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:45:35.342541   10387 out.go:177] * Starting "bridge-736000" primary control-plane node in "bridge-736000" cluster
	I1211 15:45:35.346602   10387 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:45:35.346626   10387 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:45:35.346641   10387 cache.go:56] Caching tarball of preloaded images
	I1211 15:45:35.346724   10387 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:45:35.346729   10387 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:45:35.346784   10387 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/bridge-736000/config.json ...
	I1211 15:45:35.346796   10387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/bridge-736000/config.json: {Name:mkf36d0fc12de8ff5c8a0e465c8db6d16cd2d053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:45:35.347268   10387 start.go:360] acquireMachinesLock for bridge-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:35.347320   10387 start.go:364] duration metric: took 45.625µs to acquireMachinesLock for "bridge-736000"
	I1211 15:45:35.347331   10387 start.go:93] Provisioning new machine with config: &{Name:bridge-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:35.347371   10387 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:35.355665   10387 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:35.373533   10387 start.go:159] libmachine.API.Create for "bridge-736000" (driver="qemu2")
	I1211 15:45:35.373560   10387 client.go:168] LocalClient.Create starting
	I1211 15:45:35.373627   10387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:35.373665   10387 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:35.373675   10387 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:35.373715   10387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:35.373745   10387 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:35.373752   10387 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:35.374248   10387 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:35.534821   10387 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:35.625059   10387 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:35.625065   10387 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:35.625278   10387 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2
	I1211 15:45:35.635090   10387 main.go:141] libmachine: STDOUT: 
	I1211 15:45:35.635113   10387 main.go:141] libmachine: STDERR: 
	I1211 15:45:35.635168   10387 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2 +20000M
	I1211 15:45:35.643692   10387 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:35.643709   10387 main.go:141] libmachine: STDERR: 
	I1211 15:45:35.643729   10387 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2
	I1211 15:45:35.643736   10387 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:35.643748   10387 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:35.643783   10387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:52:c6:fe:67:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2
	I1211 15:45:35.645624   10387 main.go:141] libmachine: STDOUT: 
	I1211 15:45:35.645640   10387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:35.645659   10387 client.go:171] duration metric: took 272.100333ms to LocalClient.Create
	I1211 15:45:37.647779   10387 start.go:128] duration metric: took 2.300458042s to createHost
	I1211 15:45:37.647847   10387 start.go:83] releasing machines lock for "bridge-736000", held for 2.30058825s
	W1211 15:45:37.647891   10387 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:37.659782   10387 out.go:177] * Deleting "bridge-736000" in qemu2 ...
	W1211 15:45:37.688575   10387 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:37.688602   10387 start.go:729] Will try again in 5 seconds ...
	I1211 15:45:42.690601   10387 start.go:360] acquireMachinesLock for bridge-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:42.691161   10387 start.go:364] duration metric: took 429.292µs to acquireMachinesLock for "bridge-736000"
	I1211 15:45:42.691286   10387 start.go:93] Provisioning new machine with config: &{Name:bridge-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:42.691583   10387 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:42.710282   10387 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:42.758999   10387 start.go:159] libmachine.API.Create for "bridge-736000" (driver="qemu2")
	I1211 15:45:42.759053   10387 client.go:168] LocalClient.Create starting
	I1211 15:45:42.759179   10387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:42.759255   10387 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:42.759272   10387 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:42.759338   10387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:42.759397   10387 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:42.759411   10387 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:42.760267   10387 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:42.931220   10387 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:43.095518   10387 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:43.095525   10387 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:43.095773   10387 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2
	I1211 15:45:43.106241   10387 main.go:141] libmachine: STDOUT: 
	I1211 15:45:43.106271   10387 main.go:141] libmachine: STDERR: 
	I1211 15:45:43.106328   10387 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2 +20000M
	I1211 15:45:43.114817   10387 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:43.114833   10387 main.go:141] libmachine: STDERR: 
	I1211 15:45:43.114847   10387 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2
	I1211 15:45:43.114851   10387 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:43.114859   10387 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:43.114895   10387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f9:e7:73:b7:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/bridge-736000/disk.qcow2
	I1211 15:45:43.116725   10387 main.go:141] libmachine: STDOUT: 
	I1211 15:45:43.116744   10387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:43.116758   10387 client.go:171] duration metric: took 357.709708ms to LocalClient.Create
	I1211 15:45:45.118933   10387 start.go:128] duration metric: took 2.42737425s to createHost
	I1211 15:45:45.119006   10387 start.go:83] releasing machines lock for "bridge-736000", held for 2.427895625s
	W1211 15:45:45.119394   10387 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:45.135102   10387 out.go:201] 
	W1211 15:45:45.140253   10387 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:45:45.140286   10387 out.go:270] * 
	W1211 15:45:45.142943   10387 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:45:45.160059   10387 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)
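
Because the failure is host-side, the per-profile hint in the log ("minikube delete -p ...") only removes the half-created machine directory; it cannot fix the refused connection. Restoring the daemon is what matters. One way to do that by hand, as a sketch assuming a source install under /opt/socket_vmnet (the gateway address is the upstream default, an assumption here rather than something taken from this log):

	# socket_vmnet must run as root to create the vmnet interface.
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	  --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

On a CI agent like this one, registering socket_vmnet as a launchd service so it survives reboots is the durable fix.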

TestNetworkPlugins/group/kubenet/Start (9.88s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-736000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.880996375s)
-- stdout --
	* [kubenet-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-736000" primary control-plane node in "kubenet-736000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-736000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1211 15:45:47.527433   10496 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:45:47.527591   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:47.527594   10496 out.go:358] Setting ErrFile to fd 2...
	I1211 15:45:47.527596   10496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:47.527733   10496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:45:47.528874   10496 out.go:352] Setting JSON to false
	I1211 15:45:47.546371   10496 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6317,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:45:47.546453   10496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:45:47.553474   10496 out.go:177] * [kubenet-736000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:45:47.561454   10496 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:45:47.561534   10496 notify.go:220] Checking for updates...
	I1211 15:45:47.568450   10496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:45:47.572468   10496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:45:47.575495   10496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:45:47.579445   10496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:45:47.583420   10496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:45:47.586742   10496 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:47.586821   10496 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:47.586876   10496 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:45:47.591385   10496 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:45:47.598429   10496 start.go:297] selected driver: qemu2
	I1211 15:45:47.598434   10496 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:45:47.598439   10496 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:45:47.601082   10496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:45:47.604558   10496 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:45:47.607551   10496 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:45:47.607581   10496 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1211 15:45:47.607610   10496 start.go:340] cluster config:
	{Name:kubenet-736000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:45:47.612632   10496 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:45:47.620381   10496 out.go:177] * Starting "kubenet-736000" primary control-plane node in "kubenet-736000" cluster
	I1211 15:45:47.624443   10496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:45:47.624459   10496 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:45:47.624467   10496 cache.go:56] Caching tarball of preloaded images
	I1211 15:45:47.624558   10496 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:45:47.624564   10496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:45:47.624623   10496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kubenet-736000/config.json ...
	I1211 15:45:47.624638   10496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/kubenet-736000/config.json: {Name:mkd91694054c4d988f68cb48b57d72a07d547114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:45:47.624988   10496 start.go:360] acquireMachinesLock for kubenet-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:47.625038   10496 start.go:364] duration metric: took 44.625µs to acquireMachinesLock for "kubenet-736000"
	I1211 15:45:47.625058   10496 start.go:93] Provisioning new machine with config: &{Name:kubenet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:47.625110   10496 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:47.633486   10496 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:47.651938   10496 start.go:159] libmachine.API.Create for "kubenet-736000" (driver="qemu2")
	I1211 15:45:47.651973   10496 client.go:168] LocalClient.Create starting
	I1211 15:45:47.652054   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:47.652091   10496 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:47.652104   10496 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:47.652147   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:47.652178   10496 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:47.652187   10496 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:47.652596   10496 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:47.812910   10496 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:47.944515   10496 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:47.944521   10496 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:47.944751   10496 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2
	I1211 15:45:47.954716   10496 main.go:141] libmachine: STDOUT: 
	I1211 15:45:47.954732   10496 main.go:141] libmachine: STDERR: 
	I1211 15:45:47.954790   10496 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2 +20000M
	I1211 15:45:47.963300   10496 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:47.963314   10496 main.go:141] libmachine: STDERR: 
	I1211 15:45:47.963328   10496 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2
	I1211 15:45:47.963336   10496 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:47.963349   10496 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:47.963380   10496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:ce:96:40:a7:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2
	I1211 15:45:47.965252   10496 main.go:141] libmachine: STDOUT: 
	I1211 15:45:47.965264   10496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:47.965283   10496 client.go:171] duration metric: took 313.310958ms to LocalClient.Create
	I1211 15:45:49.967390   10496 start.go:128] duration metric: took 2.34233225s to createHost
	I1211 15:45:49.967454   10496 start.go:83] releasing machines lock for "kubenet-736000", held for 2.342477583s
	W1211 15:45:49.967504   10496 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:49.978920   10496 out.go:177] * Deleting "kubenet-736000" in qemu2 ...
	W1211 15:45:50.012409   10496 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:50.012471   10496 start.go:729] Will try again in 5 seconds ...
	I1211 15:45:55.014600   10496 start.go:360] acquireMachinesLock for kubenet-736000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:55.015136   10496 start.go:364] duration metric: took 429.917µs to acquireMachinesLock for "kubenet-736000"
	I1211 15:45:55.015274   10496 start.go:93] Provisioning new machine with config: &{Name:kubenet-736000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-736000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:55.015543   10496 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:55.026745   10496 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1211 15:45:55.074576   10496 start.go:159] libmachine.API.Create for "kubenet-736000" (driver="qemu2")
	I1211 15:45:55.074627   10496 client.go:168] LocalClient.Create starting
	I1211 15:45:55.074766   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:55.074852   10496 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:55.074868   10496 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:55.074946   10496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:55.075002   10496 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:55.075014   10496 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:55.075809   10496 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:45:55.247852   10496 main.go:141] libmachine: Creating SSH key...
	I1211 15:45:55.307338   10496 main.go:141] libmachine: Creating Disk image...
	I1211 15:45:55.307343   10496 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:45:55.307567   10496 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2
	I1211 15:45:55.317776   10496 main.go:141] libmachine: STDOUT: 
	I1211 15:45:55.317797   10496 main.go:141] libmachine: STDERR: 
	I1211 15:45:55.317859   10496 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2 +20000M
	I1211 15:45:55.326432   10496 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:45:55.326454   10496 main.go:141] libmachine: STDERR: 
	I1211 15:45:55.326468   10496 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2
	I1211 15:45:55.326474   10496 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:45:55.326482   10496 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:45:55.326512   10496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:1f:27:e6:c7:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/kubenet-736000/disk.qcow2
	I1211 15:45:55.328345   10496 main.go:141] libmachine: STDOUT: 
	I1211 15:45:55.328362   10496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:45:55.328375   10496 client.go:171] duration metric: took 253.75175ms to LocalClient.Create
	I1211 15:45:57.330499   10496 start.go:128] duration metric: took 2.314998375s to createHost
	I1211 15:45:57.330573   10496 start.go:83] releasing machines lock for "kubenet-736000", held for 2.315485625s
	W1211 15:45:57.331014   10496 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-736000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:45:57.344783   10496 out.go:201] 
	W1211 15:45:57.347970   10496 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:45:57.347997   10496 out.go:270] * 
	* 
	W1211 15:45:57.350489   10496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:45:57.360792   10496 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.88s)
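The stderr above also shows minikube's recovery path: the first StartHost attempt fails, the half-created profile is deleted, and exactly one retry runs after a fixed five-second delay ("Will try again in 5 seconds ...") before the command gives up. Against a daemon that is down, the retry can only fail identically, which is why both attempts log the same "Connection refused". The control flow is roughly this shape (an illustration of the behavior visible in the log, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the real provisioning step; here it always
// fails the way the log above does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return // first attempt succeeded
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // the fixed delay seen in the log
	if err := startHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // the exit status the tests observe
	}
}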
TestStartStop/group/old-k8s-version/serial/FirstStart (10.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.996181959s)
-- stdout --
	* [old-k8s-version-634000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1211 15:45:59.736811   10607 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:45:59.736993   10607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:59.736997   10607 out.go:358] Setting ErrFile to fd 2...
	I1211 15:45:59.736999   10607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:45:59.737123   10607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:45:59.738332   10607 out.go:352] Setting JSON to false
	I1211 15:45:59.755867   10607 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6329,"bootTime":1733954430,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:45:59.755940   10607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:45:59.760885   10607 out.go:177] * [old-k8s-version-634000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:45:59.768836   10607 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:45:59.768876   10607 notify.go:220] Checking for updates...
	I1211 15:45:59.776640   10607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:45:59.779732   10607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:45:59.783621   10607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:45:59.786645   10607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:45:59.789746   10607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:45:59.793095   10607 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:59.793170   10607 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:45:59.793218   10607 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:45:59.797737   10607 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:45:59.804703   10607 start.go:297] selected driver: qemu2
	I1211 15:45:59.804710   10607 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:45:59.804722   10607 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:45:59.807363   10607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:45:59.810730   10607 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:45:59.813823   10607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:45:59.813847   10607 cni.go:84] Creating CNI manager for ""
	I1211 15:45:59.813877   10607 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1211 15:45:59.813904   10607 start.go:340] cluster config:
	{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:45:59.818730   10607 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:45:59.826730   10607 out.go:177] * Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	I1211 15:45:59.830573   10607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:45:59.830591   10607 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:45:59.830598   10607 cache.go:56] Caching tarball of preloaded images
	I1211 15:45:59.830692   10607 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:45:59.830698   10607 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1211 15:45:59.830762   10607 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/old-k8s-version-634000/config.json ...
	I1211 15:45:59.830773   10607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/old-k8s-version-634000/config.json: {Name:mk48514a484ce5632c8a11167547259327d0ad6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:45:59.831232   10607 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:45:59.831283   10607 start.go:364] duration metric: took 44.708µs to acquireMachinesLock for "old-k8s-version-634000"
	I1211 15:45:59.831294   10607 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:45:59.831326   10607 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:45:59.839552   10607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:45:59.857554   10607 start.go:159] libmachine.API.Create for "old-k8s-version-634000" (driver="qemu2")
	I1211 15:45:59.857585   10607 client.go:168] LocalClient.Create starting
	I1211 15:45:59.857679   10607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:45:59.857722   10607 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:59.857733   10607 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:59.857771   10607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:45:59.857802   10607 main.go:141] libmachine: Decoding PEM data...
	I1211 15:45:59.857809   10607 main.go:141] libmachine: Parsing certificate...
	I1211 15:45:59.858195   10607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:00.020802   10607 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:00.177143   10607 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:00.177151   10607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:00.177401   10607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:00.187859   10607 main.go:141] libmachine: STDOUT: 
	I1211 15:46:00.187880   10607 main.go:141] libmachine: STDERR: 
	I1211 15:46:00.187946   10607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2 +20000M
	I1211 15:46:00.196357   10607 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:00.196373   10607 main.go:141] libmachine: STDERR: 
	I1211 15:46:00.196389   10607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:00.196400   10607 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:00.196412   10607 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:00.196446   10607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ac:46:c0:1a:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:00.198205   10607 main.go:141] libmachine: STDOUT: 
	I1211 15:46:00.198221   10607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:00.198244   10607 client.go:171] duration metric: took 340.659291ms to LocalClient.Create
	I1211 15:46:02.200421   10607 start.go:128] duration metric: took 2.369087542s to createHost
	I1211 15:46:02.200470   10607 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 2.369249375s
	W1211 15:46:02.200509   10607 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:02.211859   10607 out.go:177] * Deleting "old-k8s-version-634000" in qemu2 ...
	W1211 15:46:02.241748   10607 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:02.241773   10607 start.go:729] Will try again in 5 seconds ...
	I1211 15:46:07.243870   10607 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:07.244405   10607 start.go:364] duration metric: took 438.417µs to acquireMachinesLock for "old-k8s-version-634000"
	I1211 15:46:07.244518   10607 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:46:07.244746   10607 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:46:07.253400   10607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:46:07.302106   10607 start.go:159] libmachine.API.Create for "old-k8s-version-634000" (driver="qemu2")
	I1211 15:46:07.302154   10607 client.go:168] LocalClient.Create starting
	I1211 15:46:07.302269   10607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:46:07.302349   10607 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:07.302366   10607 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:07.302424   10607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:46:07.302478   10607 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:07.302490   10607 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:07.303122   10607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:07.475683   10607 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:07.624229   10607 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:07.624238   10607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:07.624454   10607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:07.634578   10607 main.go:141] libmachine: STDOUT: 
	I1211 15:46:07.634606   10607 main.go:141] libmachine: STDERR: 
	I1211 15:46:07.634678   10607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2 +20000M
	I1211 15:46:07.643573   10607 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:07.643591   10607 main.go:141] libmachine: STDERR: 
	I1211 15:46:07.643611   10607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:07.643615   10607 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:07.643627   10607 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:07.643657   10607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:69:36:99:40:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:07.645444   10607 main.go:141] libmachine: STDOUT: 
	I1211 15:46:07.645458   10607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:07.645473   10607 client.go:171] duration metric: took 343.322417ms to LocalClient.Create
	I1211 15:46:09.647682   10607 start.go:128] duration metric: took 2.402978292s to createHost
	I1211 15:46:09.647741   10607 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 2.403386375s
	W1211 15:46:09.648245   10607 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:09.664324   10607 out.go:201] 
	W1211 15:46:09.667451   10607 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:09.667493   10607 out.go:270] * 
	* 
	W1211 15:46:09.680891   10607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:46:09.689368   10607 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (58.22125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.06s)
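Three distinct exit codes appear in this section, and the harness treats them differently: minikube start fails with status 80 (the GUEST_PROVISION error class, per the X line above), while the post-mortem status probe returns 7, which helpers_test.go explicitly tolerates ("may be ok") because a stopped host is a legitimate state to report. A small sketch of how a Go harness can recover a child process's exit code for this kind of triage (illustrative; the helper name is mine, not from helpers_test.go):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs cmd and reports its exit status: 0 on success, the
// process's own code on a clean non-zero exit, or -1 with an error
// if the command never ran at all.
func exitCode(cmd *exec.Cmd) (int, error) {
	err := cmd.Run()
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil // e.g. 80 from start, 7 from status
	}
	return -1, err
}

func main() {
	code, err := exitCode(exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "old-k8s-version-634000"))
	fmt.Println(code, err) // 7 here simply means the host is stopped
}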
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-634000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-634000 create -f testdata/busybox.yaml: exit status 1 (31.231792ms)
** stderr ** 
	error: context "old-k8s-version-634000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-634000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.788375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.7635ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
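Because FirstStart above never brought a cluster up, minikube never wrote an "old-k8s-version-634000" entry into the kubeconfig, so every kubectl --context old-k8s-version-634000 call in this group fails immediately with "context ... does not exist" (exit status 1) before touching the network. A dependent test could detect that precondition cheaply; a minimal sketch assuming kubectl is on PATH (hypothetical helper, not from the suite):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// hasContext reports whether the named context exists in the active
// kubeconfig, using `kubectl config get-contexts -o name`.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasContext("old-k8s-version-634000")
	fmt.Println(ok, err) // false here: FirstStart never created the context
}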
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-634000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-634000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-634000 describe deploy/metrics-server -n kube-system: exit status 1 (28.201917ms)
** stderr ** 
	error: context "old-k8s-version-634000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-634000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.51325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
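
For reference: the expected string " fake.domain/registry.k8s.io/echoserver:1.4" is the --registries override joined onto the --images override from the addons enable invocation above. A small Go illustration of how that expected value is composed (illustrative only, not minikube's addon-image code):

package main

import "fmt"

// expectedAddonImage joins a registry override onto an image override, which
// is the shape the assertion above checks for.
func expectedAddonImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// --images=MetricsServer=registry.k8s.io/echoserver:1.4
	// --registries=MetricsServer=fake.domain
	fmt.Println(expectedAddonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}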

TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.210066958s)

-- stdout --
	* [old-k8s-version-634000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:46:12.014776   10662 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:12.014946   10662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:12.014949   10662 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:12.014951   10662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:12.015070   10662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:12.016399   10662 out.go:352] Setting JSON to false
	I1211 15:46:12.035701   10662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6342,"bootTime":1733954430,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:46:12.035779   10662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:46:12.040258   10662 out.go:177] * [old-k8s-version-634000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:46:12.048203   10662 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:46:12.048238   10662 notify.go:220] Checking for updates...
	I1211 15:46:12.056163   10662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:46:12.059259   10662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:46:12.062181   10662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:46:12.065159   10662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:46:12.068221   10662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:46:12.071457   10662 config.go:182] Loaded profile config "old-k8s-version-634000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1211 15:46:12.075058   10662 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1211 15:46:12.078265   10662 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:46:12.082254   10662 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:46:12.089197   10662 start.go:297] selected driver: qemu2
	I1211 15:46:12.089202   10662 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:12.089257   10662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:46:12.092041   10662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:46:12.092067   10662 cni.go:84] Creating CNI manager for ""
	I1211 15:46:12.092089   10662 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1211 15:46:12.092114   10662 start.go:340] cluster config:
	{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:12.096970   10662 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:12.105176   10662 out.go:177] * Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	I1211 15:46:12.108179   10662 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:46:12.108193   10662 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:46:12.108201   10662 cache.go:56] Caching tarball of preloaded images
	I1211 15:46:12.108273   10662 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:46:12.108279   10662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1211 15:46:12.108336   10662 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/old-k8s-version-634000/config.json ...
	I1211 15:46:12.108913   10662 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:12.108945   10662 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "old-k8s-version-634000"
	I1211 15:46:12.108953   10662 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:46:12.108959   10662 fix.go:54] fixHost starting: 
	I1211 15:46:12.109080   10662 fix.go:112] recreateIfNeeded on old-k8s-version-634000: state=Stopped err=<nil>
	W1211 15:46:12.109089   10662 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:46:12.114155   10662 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	I1211 15:46:12.121175   10662 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:12.121215   10662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:69:36:99:40:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:12.123663   10662 main.go:141] libmachine: STDOUT: 
	I1211 15:46:12.123684   10662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:12.123716   10662 fix.go:56] duration metric: took 14.755333ms for fixHost
	I1211 15:46:12.123726   10662 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 14.773583ms
	W1211 15:46:12.123733   10662 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:12.123773   10662 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:12.123777   10662 start.go:729] Will try again in 5 seconds ...
	I1211 15:46:17.125907   10662 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:17.126324   10662 start.go:364] duration metric: took 313.791µs to acquireMachinesLock for "old-k8s-version-634000"
	I1211 15:46:17.126447   10662 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:46:17.126463   10662 fix.go:54] fixHost starting: 
	I1211 15:46:17.127148   10662 fix.go:112] recreateIfNeeded on old-k8s-version-634000: state=Stopped err=<nil>
	W1211 15:46:17.127174   10662 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:46:17.134027   10662 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	I1211 15:46:17.143197   10662 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:17.143470   10662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:69:36:99:40:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I1211 15:46:17.154575   10662 main.go:141] libmachine: STDOUT: 
	I1211 15:46:17.154650   10662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:17.154733   10662 fix.go:56] duration metric: took 28.268958ms for fixHost
	I1211 15:46:17.154761   10662 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 28.412292ms
	W1211 15:46:17.154934   10662 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:17.162382   10662 out.go:201] 
	W1211 15:46:17.167359   10662 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:17.167414   10662 out.go:270] * 
	* 
	W1211 15:46:17.170208   10662 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:46:17.179099   10662 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (75.170083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)
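
For reference: both restart attempts fail at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU never receives its network file descriptor. "Connection refused" on a unix socket means nothing is accepting at that path. A quick probe of the same socket the driver dials (path taken from the log; this check is illustrative, not part of the suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the control socket that socket_vmnet_client uses; a refused
	// connection reproduces the driver failure logged above and indicates
	// the socket_vmnet daemon is not running (or not listening here).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}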

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-634000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (36.153542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)
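
For reference: the "client config" error comes from resolving the profile's context while building a Kubernetes client, before any pod polling can begin. A sketch of the same resolution using client-go (assuming client-go's clientcmd package, which produces this exact error string for a missing context; illustrative only):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the profile's context from the default kubeconfig; with the
	// cluster never started the context is absent and ClientConfig fails,
	// matching the error above.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-634000"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		fmt.Println("client config:", err)
	}
}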

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-634000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-634000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-634000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.570583ms)

** stderr **
	error: context "old-k8s-version-634000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-634000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.454584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-634000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.232333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
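
For reference: the "(-want +got)" block above is the diff layout produced by github.com/google/go-cmp, with every wanted image marked missing because "image list" against the stopped VM returned nothing. A self-contained sketch reproducing that shape (go-cmp is inferred from the output format; the want list is abbreviated):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
	}
	var got []string // empty: the stopped VM reported no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}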

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-634000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-634000 --alsologtostderr -v=1: exit status 83 (45.218625ms)

-- stdout --
	* The control-plane node old-k8s-version-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-634000"

-- /stdout --
** stderr ** 
	I1211 15:46:17.479250   10682 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:17.479847   10682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:17.479851   10682 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:17.479853   10682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:17.479974   10682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:17.480160   10682 out.go:352] Setting JSON to false
	I1211 15:46:17.480169   10682 mustload.go:65] Loading cluster: old-k8s-version-634000
	I1211 15:46:17.480389   10682 config.go:182] Loaded profile config "old-k8s-version-634000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1211 15:46:17.484044   10682 out.go:177] * The control-plane node old-k8s-version-634000 host is not running: state=Stopped
	I1211 15:46:17.487983   10682 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-634000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-634000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.5395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (34.865166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)
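
For reference: pause exits 83 here rather than pausing anything, since the control-plane host is stopped; the harness treats any non-zero exit as a failure. A sketch of capturing that exit code from Go (binary path and profile taken from the log; illustrative only):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-634000")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In this run: exit status 83, plus the "host is not running:
		// state=Stopped" hint on stdout instead of a pause.
		fmt.Printf("pause exited %d:\n%s", ee.ExitCode(), out)
	}
}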

TestStartStop/group/no-preload/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.023015667s)

-- stdout --
	* [no-preload-854000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-854000" primary control-plane node in "no-preload-854000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:46:17.835447   10699 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:17.835616   10699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:17.835619   10699 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:17.835622   10699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:17.835758   10699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:17.837101   10699 out.go:352] Setting JSON to false
	I1211 15:46:17.857335   10699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6347,"bootTime":1733954430,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:46:17.857417   10699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:46:17.863078   10699 out.go:177] * [no-preload-854000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:46:17.871014   10699 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:46:17.871059   10699 notify.go:220] Checking for updates...
	I1211 15:46:17.876623   10699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:46:17.881031   10699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:46:17.883993   10699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:46:17.886940   10699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:46:17.890003   10699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:46:17.893379   10699 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:17.893439   10699 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:17.893486   10699 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:46:17.896897   10699 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:46:17.904051   10699 start.go:297] selected driver: qemu2
	I1211 15:46:17.904059   10699 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:46:17.904069   10699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:46:17.906985   10699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:46:17.909991   10699 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:46:17.913114   10699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:46:17.913133   10699 cni.go:84] Creating CNI manager for ""
	I1211 15:46:17.913155   10699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:46:17.913159   10699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:46:17.913200   10699 start.go:340] cluster config:
	{Name:no-preload-854000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:17.918187   10699 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.926051   10699 out.go:177] * Starting "no-preload-854000" primary control-plane node in "no-preload-854000" cluster
	I1211 15:46:17.930040   10699 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:46:17.930111   10699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/no-preload-854000/config.json ...
	I1211 15:46:17.930127   10699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/no-preload-854000/config.json: {Name:mk91b86597a80524e4a25d22244cf2866606f1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:46:17.930132   10699 cache.go:107] acquiring lock: {Name:mkc097e774b50d6e493e31a093813a0d5ca9f4c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930132   10699 cache.go:107] acquiring lock: {Name:mk1dbbdaae6006ccbcbeac6463fe60cf87209f26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930165   10699 cache.go:107] acquiring lock: {Name:mk99d8a04eb25a501c54cdef1080f6e0a1b38dc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930185   10699 cache.go:107] acquiring lock: {Name:mk81d0cb2a2ae1a6ac310d7b77a4e035e87270d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930206   10699 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1211 15:46:17.930214   10699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 89.375µs
	I1211 15:46:17.930283   10699 cache.go:107] acquiring lock: {Name:mk22869e3e35a69462852dde18b73aa97ddfa05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930302   10699 cache.go:107] acquiring lock: {Name:mk2701cbf109ab2e4e8926dc05f602208e6d5690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930310   10699 cache.go:107] acquiring lock: {Name:mk188e3e50531bfd86648336581ae93d4093204a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930330   10699 cache.go:107] acquiring lock: {Name:mk4b57b8647ba31427db543ed1fc902501bfe199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:17.930371   10699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1211 15:46:17.931191   10699 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1211 15:46:17.931199   10699 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1211 15:46:17.931348   10699 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1211 15:46:17.931366   10699 start.go:360] acquireMachinesLock for no-preload-854000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:17.931382   10699 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1211 15:46:17.931394   10699 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1211 15:46:17.931420   10699 start.go:364] duration metric: took 46.292µs to acquireMachinesLock for "no-preload-854000"
	I1211 15:46:17.931356   10699 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1211 15:46:17.931431   10699 start.go:93] Provisioning new machine with config: &{Name:no-preload-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:no-preload-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:46:17.931481   10699 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1211 15:46:17.931484   10699 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:46:17.936969   10699 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:46:17.942328   10699 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1211 15:46:17.942345   10699 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1211 15:46:17.942324   10699 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1211 15:46:17.942427   10699 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1211 15:46:17.942452   10699 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1211 15:46:17.944282   10699 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1211 15:46:17.944415   10699 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1211 15:46:17.955013   10699 start.go:159] libmachine.API.Create for "no-preload-854000" (driver="qemu2")
	I1211 15:46:17.955035   10699 client.go:168] LocalClient.Create starting
	I1211 15:46:17.955123   10699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:46:17.955161   10699 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:17.955175   10699 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:17.955221   10699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:46:17.955252   10699 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:17.955262   10699 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:17.955642   10699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:18.128638   10699 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:18.235335   10699 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:18.235354   10699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:18.235891   10699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:18.245934   10699 main.go:141] libmachine: STDOUT: 
	I1211 15:46:18.245958   10699 main.go:141] libmachine: STDERR: 
	I1211 15:46:18.246024   10699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2 +20000M
	I1211 15:46:18.255585   10699 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:18.255607   10699 main.go:141] libmachine: STDERR: 
	I1211 15:46:18.255628   10699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:18.255632   10699 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:18.255646   10699 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:18.255680   10699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4a:ca:49:1d:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:18.257929   10699 main.go:141] libmachine: STDOUT: 
	I1211 15:46:18.257947   10699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:18.257968   10699 client.go:171] duration metric: took 302.934708ms to LocalClient.Create
	I1211 15:46:18.472743   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1211 15:46:18.476231   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1211 15:46:18.489499   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1211 15:46:18.534975   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1211 15:46:18.599274   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1211 15:46:18.638363   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1211 15:46:18.663792   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1211 15:46:18.663817   10699 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 733.587958ms
	I1211 15:46:18.663833   10699 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1211 15:46:18.687740   10699 cache.go:162] opening:  /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1211 15:46:20.258206   10699 start.go:128] duration metric: took 2.326772167s to createHost
	I1211 15:46:20.258260   10699 start.go:83] releasing machines lock for "no-preload-854000", held for 2.326899666s
	W1211 15:46:20.258297   10699 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:20.274166   10699 out.go:177] * Deleting "no-preload-854000" in qemu2 ...
	W1211 15:46:20.308376   10699 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:20.308408   10699 start.go:729] Will try again in 5 seconds ...
	I1211 15:46:22.139029   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1211 15:46:22.139085   10699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.208921458s
	I1211 15:46:22.139112   10699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1211 15:46:22.661319   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1211 15:46:22.661388   10699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 4.731193583s
	I1211 15:46:22.661419   10699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1211 15:46:23.175398   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1211 15:46:23.175449   10699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 5.245336375s
	I1211 15:46:23.175479   10699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1211 15:46:23.228467   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1211 15:46:23.228515   10699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.298530166s
	I1211 15:46:23.228544   10699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1211 15:46:23.304816   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1211 15:46:23.304867   10699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 5.374901958s
	I1211 15:46:23.304899   10699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1211 15:46:25.308437   10699 start.go:360] acquireMachinesLock for no-preload-854000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:25.308931   10699 start.go:364] duration metric: took 404.25µs to acquireMachinesLock for "no-preload-854000"
	I1211 15:46:25.309036   10699 start.go:93] Provisioning new machine with config: &{Name:no-preload-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:46:25.309267   10699 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:46:25.330619   10699 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:46:25.381613   10699 start.go:159] libmachine.API.Create for "no-preload-854000" (driver="qemu2")
	I1211 15:46:25.381666   10699 client.go:168] LocalClient.Create starting
	I1211 15:46:25.381843   10699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:46:25.381932   10699 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:25.381950   10699 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:25.382024   10699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:46:25.382081   10699 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:25.382098   10699 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:25.382643   10699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:25.547299   10699 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:25.752952   10699 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:25.752960   10699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:25.753185   10699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:25.763320   10699 main.go:141] libmachine: STDOUT: 
	I1211 15:46:25.763341   10699 main.go:141] libmachine: STDERR: 
	I1211 15:46:25.763407   10699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2 +20000M
	I1211 15:46:25.772134   10699 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:25.772152   10699 main.go:141] libmachine: STDERR: 
	I1211 15:46:25.772167   10699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:25.772170   10699 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:25.772184   10699 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:25.772228   10699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d2:af:e5:a4:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:25.774080   10699 main.go:141] libmachine: STDOUT: 
	I1211 15:46:25.774093   10699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:25.774114   10699 client.go:171] duration metric: took 392.454333ms to LocalClient.Create
	I1211 15:46:26.469123   10699 cache.go:157] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1211 15:46:26.469212   10699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.539294708s
	I1211 15:46:26.469250   10699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1211 15:46:26.469301   10699 cache.go:87] Successfully saved all images to host disk.
	I1211 15:46:27.774500   10699 start.go:128] duration metric: took 2.465277833s to createHost
	I1211 15:46:27.774565   10699 start.go:83] releasing machines lock for "no-preload-854000", held for 2.465686708s
	W1211 15:46:27.774892   10699 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:27.793624   10699 out.go:201] 
	W1211 15:46:27.797899   10699 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:27.797941   10699 out.go:270] * 
	* 
	W1211 15:46:27.800336   10699 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:46:27.810707   10699 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (71.8025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.10s)
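Every failed start in this group dies at the same step: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal host-side check, assuming the Homebrew-managed socket_vmnet service these paths suggest (the service name is an assumption, not something this log confirms):

	# Is the daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If it is down, restarting the service usually clears the refusal.
	sudo brew services restart socket_vmnet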

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-854000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-854000 create -f testdata/busybox.yaml: exit status 1 (29.670125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-854000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-854000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (33.800292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (33.208459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
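This failure, and the addon and image checks that follow, are downstream of FirstStart: the VM never booted, so minikube never wrote a "no-preload-854000" context into the kubeconfig, and every kubectl --context invocation exits 1. A quick confirmation sketch, reusing the kubeconfig path from this run:

	# The no-preload-854000 entry should be absent after the failed first start.
	KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig \
	  kubectl config get-contexts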

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-854000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-854000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-854000 describe deploy/metrics-server -n kube-system: exit status 1 (27.284125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-854000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-854000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (32.932333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.193731708s)

                                                
                                                
-- stdout --
	* [no-preload-854000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-854000" primary control-plane node in "no-preload-854000" cluster
	* Restarting existing qemu2 VM for "no-preload-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:46:31.955145   10775 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:31.955312   10775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:31.955315   10775 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:31.955318   10775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:31.955437   10775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:31.956588   10775 out.go:352] Setting JSON to false
	I1211 15:46:31.973988   10775 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6361,"bootTime":1733954430,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:46:31.974056   10775 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:46:31.979302   10775 out.go:177] * [no-preload-854000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:46:31.986196   10775 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:46:31.986245   10775 notify.go:220] Checking for updates...
	I1211 15:46:31.993315   10775 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:46:31.997247   10775 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:46:32.000258   10775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:46:32.003348   10775 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:46:32.006338   10775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:46:32.009535   10775 config.go:182] Loaded profile config "no-preload-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:32.009812   10775 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:46:32.013281   10775 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:46:32.020244   10775 start.go:297] selected driver: qemu2
	I1211 15:46:32.020250   10775 start.go:901] validating driver "qemu2" against &{Name:no-preload-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:32.020299   10775 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:46:32.022933   10775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:46:32.022957   10775 cni.go:84] Creating CNI manager for ""
	I1211 15:46:32.022980   10775 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:46:32.023009   10775 start.go:340] cluster config:
	{Name:no-preload-854000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-854000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:32.027588   10775 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.036293   10775 out.go:177] * Starting "no-preload-854000" primary control-plane node in "no-preload-854000" cluster
	I1211 15:46:32.040317   10775 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:46:32.040380   10775 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/no-preload-854000/config.json ...
	I1211 15:46:32.040405   10775 cache.go:107] acquiring lock: {Name:mkc097e774b50d6e493e31a093813a0d5ca9f4c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040408   10775 cache.go:107] acquiring lock: {Name:mk1dbbdaae6006ccbcbeac6463fe60cf87209f26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040447   10775 cache.go:107] acquiring lock: {Name:mk99d8a04eb25a501c54cdef1080f6e0a1b38dc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040483   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1211 15:46:32.040491   10775 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.459µs
	I1211 15:46:32.040503   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1211 15:46:32.040507   10775 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1211 15:46:32.040508   10775 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 111.041µs
	I1211 15:46:32.040513   10775 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1211 15:46:32.040513   10775 cache.go:107] acquiring lock: {Name:mk22869e3e35a69462852dde18b73aa97ddfa05f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040527   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1211 15:46:32.040535   10775 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 90.5µs
	I1211 15:46:32.040539   10775 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1211 15:46:32.040555   10775 cache.go:107] acquiring lock: {Name:mk2701cbf109ab2e4e8926dc05f602208e6d5690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040569   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1211 15:46:32.040573   10775 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 60.209µs
	I1211 15:46:32.040576   10775 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1211 15:46:32.040582   10775 cache.go:107] acquiring lock: {Name:mk188e3e50531bfd86648336581ae93d4093204a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040576   10775 cache.go:107] acquiring lock: {Name:mk81d0cb2a2ae1a6ac310d7b77a4e035e87270d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040607   10775 cache.go:107] acquiring lock: {Name:mk4b57b8647ba31427db543ed1fc902501bfe199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:32.040648   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1211 15:46:32.040660   10775 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 126.958µs
	I1211 15:46:32.040664   10775 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1211 15:46:32.040652   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1211 15:46:32.040684   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1211 15:46:32.040684   10775 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 102.542µs
	I1211 15:46:32.040699   10775 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1211 15:46:32.040688   10775 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 141.25µs
	I1211 15:46:32.040704   10775 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1211 15:46:32.040723   10775 cache.go:115] /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1211 15:46:32.040727   10775 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 141.583µs
	I1211 15:46:32.040731   10775 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1211 15:46:32.040736   10775 cache.go:87] Successfully saved all images to host disk.
	I1211 15:46:32.040885   10775 start.go:360] acquireMachinesLock for no-preload-854000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:32.040920   10775 start.go:364] duration metric: took 28.916µs to acquireMachinesLock for "no-preload-854000"
	I1211 15:46:32.040929   10775 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:46:32.040933   10775 fix.go:54] fixHost starting: 
	I1211 15:46:32.041055   10775 fix.go:112] recreateIfNeeded on no-preload-854000: state=Stopped err=<nil>
	W1211 15:46:32.041063   10775 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:46:32.049299   10775 out.go:177] * Restarting existing qemu2 VM for "no-preload-854000" ...
	I1211 15:46:32.053281   10775 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:32.053318   10775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d2:af:e5:a4:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:32.055678   10775 main.go:141] libmachine: STDOUT: 
	I1211 15:46:32.055703   10775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:32.055735   10775 fix.go:56] duration metric: took 14.798834ms for fixHost
	I1211 15:46:32.055741   10775 start.go:83] releasing machines lock for "no-preload-854000", held for 14.816375ms
	W1211 15:46:32.055746   10775 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:32.055789   10775 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:32.055793   10775 start.go:729] Will try again in 5 seconds ...
	I1211 15:46:37.057832   10775 start.go:360] acquireMachinesLock for no-preload-854000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:37.058434   10775 start.go:364] duration metric: took 475.875µs to acquireMachinesLock for "no-preload-854000"
	I1211 15:46:37.058631   10775 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:46:37.058654   10775 fix.go:54] fixHost starting: 
	I1211 15:46:37.059571   10775 fix.go:112] recreateIfNeeded on no-preload-854000: state=Stopped err=<nil>
	W1211 15:46:37.059612   10775 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:46:37.064267   10775 out.go:177] * Restarting existing qemu2 VM for "no-preload-854000" ...
	I1211 15:46:37.074140   10775 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:37.074484   10775 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d2:af:e5:a4:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/no-preload-854000/disk.qcow2
	I1211 15:46:37.085494   10775 main.go:141] libmachine: STDOUT: 
	I1211 15:46:37.085588   10775 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:37.085655   10775 fix.go:56] duration metric: took 26.999042ms for fixHost
	I1211 15:46:37.085678   10775 start.go:83] releasing machines lock for "no-preload-854000", held for 27.183ms
	W1211 15:46:37.085920   10775 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:37.093133   10775 out.go:201] 
	W1211 15:46:37.096189   10775 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:37.096237   10775 out.go:270] * 
	* 
	W1211 15:46:37.098836   10775 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:46:37.108086   10775 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-854000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (70.853917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
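SecondStart takes the restart path (fixHost rather than createHost) and shows minikube's built-in retry after five seconds, but both attempts fail in the same socket_vmnet_client handshake. The client can be probed directly to separate the host networking failure from minikube itself; a sketch using the binary and socket paths from the log (the trailing /usr/bin/true is an arbitrary child command, since socket_vmnet_client execs whatever follows the socket path):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket_vmnet reachable" \
	  || echo "connection refused: daemon likely not running"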

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-854000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (35.37375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-854000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-854000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-854000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.952292ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-854000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-854000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (33.598208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-854000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (33.049458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
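The -want/+got diff above shows image list returning an empty set: with the host stopped there is no runtime to query, so all eight expected v1.31.2 images read as missing. On a healthy cluster the assertion can be approximated by hand (the jq filter and the repoTags field name are illustrative, not taken from this log):

	# List the image tags the runtime reports, for eyeball comparison
	# against the expected list in the diff above.
	out/minikube-darwin-arm64 -p no-preload-854000 image list --format=json \
	  | jq -r '.[].repoTags[]?' | sort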

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-854000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-854000 --alsologtostderr -v=1: exit status 83 (44.470291ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-854000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:46:37.393472   10794 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:37.393650   10794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:37.393654   10794 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:37.393656   10794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:37.393779   10794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:37.393985   10794 out.go:352] Setting JSON to false
	I1211 15:46:37.393994   10794 mustload.go:65] Loading cluster: no-preload-854000
	I1211 15:46:37.394202   10794 config.go:182] Loaded profile config "no-preload-854000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:37.398241   10794 out.go:177] * The control-plane node no-preload-854000 host is not running: state=Stopped
	I1211 15:46:37.402308   10794 out.go:177]   To start a cluster, run: "minikube start -p no-preload-854000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-854000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (33.603125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (33.125416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
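Note the exit code: pause returns 83 rather than the 80 seen on the start failures. Reading this against minikube's exit-code scheme (my interpretation, not stated in the log), the 8x range denotes guest errors and the +3 offset marks "not running", which matches the state=Stopped advice printed on stdout. The code is easy to observe in isolation:

	out/minikube-darwin-arm64 pause -p no-preload-854000; echo "exit: $?"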

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.904139375s)

                                                
                                                
-- stdout --
	* [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-089000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 15:46:37.734865   10811 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:37.735016   10811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:37.735019   10811 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:37.735022   10811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:37.735166   10811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:37.736463   10811 out.go:352] Setting JSON to false
	I1211 15:46:37.754425   10811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6367,"bootTime":1733954430,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:46:37.754500   10811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:46:37.758320   10811 out.go:177] * [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:46:37.765314   10811 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:46:37.765350   10811 notify.go:220] Checking for updates...
	I1211 15:46:37.773243   10811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:46:37.776259   10811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:46:37.777835   10811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:46:37.781214   10811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:46:37.784283   10811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:46:37.787562   10811 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:37.787626   10811 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:37.787672   10811 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:46:37.792278   10811 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:46:37.799252   10811 start.go:297] selected driver: qemu2
	I1211 15:46:37.799257   10811 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:46:37.799261   10811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:46:37.801887   10811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:46:37.805175   10811 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:46:37.809314   10811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:46:37.809333   10811 cni.go:84] Creating CNI manager for ""
	I1211 15:46:37.809358   10811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:46:37.809365   10811 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:46:37.809392   10811 start.go:340] cluster config:
	{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:37.814250   10811 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:37.822285   10811 out.go:177] * Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	I1211 15:46:37.826244   10811 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:46:37.826259   10811 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:46:37.826270   10811 cache.go:56] Caching tarball of preloaded images
	I1211 15:46:37.826346   10811 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:46:37.826353   10811 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:46:37.826421   10811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/embed-certs-089000/config.json ...
	I1211 15:46:37.826432   10811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/embed-certs-089000/config.json: {Name:mk572d5cdad945e8b80e28f85c8537b3ef604ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:46:37.826705   10811 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:37.826756   10811 start.go:364] duration metric: took 44.833µs to acquireMachinesLock for "embed-certs-089000"
	I1211 15:46:37.826767   10811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:46:37.826794   10811 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:46:37.835268   10811 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:46:37.853000   10811 start.go:159] libmachine.API.Create for "embed-certs-089000" (driver="qemu2")
	I1211 15:46:37.853030   10811 client.go:168] LocalClient.Create starting
	I1211 15:46:37.853108   10811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:46:37.853151   10811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:37.853164   10811 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:37.853209   10811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:46:37.853239   10811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:37.853249   10811 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:37.853661   10811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:38.008188   10811 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:38.158384   10811 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:38.158391   10811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:38.158621   10811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:38.168958   10811 main.go:141] libmachine: STDOUT: 
	I1211 15:46:38.168978   10811 main.go:141] libmachine: STDERR: 
	I1211 15:46:38.169036   10811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2 +20000M
	I1211 15:46:38.177949   10811 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:38.177965   10811 main.go:141] libmachine: STDERR: 
	I1211 15:46:38.177980   10811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:38.177983   10811 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:38.177995   10811 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:38.178030   10811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:e1:42:91:1c:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:38.179937   10811 main.go:141] libmachine: STDOUT: 
	I1211 15:46:38.179951   10811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:38.179974   10811 client.go:171] duration metric: took 326.94575ms to LocalClient.Create
	I1211 15:46:40.182143   10811 start.go:128] duration metric: took 2.355399667s to createHost
	I1211 15:46:40.182206   10811 start.go:83] releasing machines lock for "embed-certs-089000", held for 2.355513375s
	W1211 15:46:40.182253   10811 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:40.193484   10811 out.go:177] * Deleting "embed-certs-089000" in qemu2 ...
	W1211 15:46:40.223564   10811 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:40.223602   10811 start.go:729] Will try again in 5 seconds ...
	I1211 15:46:45.225716   10811 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:45.226290   10811 start.go:364] duration metric: took 479.125µs to acquireMachinesLock for "embed-certs-089000"
	I1211 15:46:45.226398   10811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:46:45.226636   10811 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:46:45.246517   10811 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:46:45.295657   10811 start.go:159] libmachine.API.Create for "embed-certs-089000" (driver="qemu2")
	I1211 15:46:45.295716   10811 client.go:168] LocalClient.Create starting
	I1211 15:46:45.295854   10811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:46:45.295922   10811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:45.295942   10811 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:45.296001   10811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:46:45.296056   10811 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:45.296072   10811 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:45.296757   10811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:45.461969   10811 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:45.535588   10811 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:45.535593   10811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:45.535798   10811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:45.545333   10811 main.go:141] libmachine: STDOUT: 
	I1211 15:46:45.545357   10811 main.go:141] libmachine: STDERR: 
	I1211 15:46:45.545414   10811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2 +20000M
	I1211 15:46:45.553661   10811 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:45.553682   10811 main.go:141] libmachine: STDERR: 
	I1211 15:46:45.553694   10811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:45.553703   10811 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:45.553711   10811 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:45.553739   10811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:84:c6:7c:6d:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:45.555381   10811 main.go:141] libmachine: STDOUT: 
	I1211 15:46:45.555399   10811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:45.555412   10811 client.go:171] duration metric: took 259.698917ms to LocalClient.Create
	I1211 15:46:47.557521   10811 start.go:128] duration metric: took 2.330926417s to createHost
	I1211 15:46:47.557580   10811 start.go:83] releasing machines lock for "embed-certs-089000", held for 2.331335416s
	W1211 15:46:47.557964   10811 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:47.574678   10811 out.go:201] 
	W1211 15:46:47.577892   10811 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:47.577945   10811 out.go:270] * 
	* 
	W1211 15:46:47.580369   10811 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:46:47.592648   10811 out.go:201] 

** /stderr **
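Note that both qemu-img steps in the log above complete cleanly (empty STDERR); the disk-image phase is not the problem, and the run dies only at the socket_vmnet hand-off. For reference, the driver's two image operations are equivalent to the following, shown with the machine-directory paths shortened for readability:

	# convert the raw boot disk to qcow2 (the log uses the full machines/embed-certs-089000 path)
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# then grow the image by the requested 20000 MB
	qemu-img resize disk.qcow2 +20000M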
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (71.070208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
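Every attempt in this group stops at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as on this agent (the restart incantation is the usual one from the minikube qemu driver docs; verify against your install):

	# does the daemon's unix socket exist?
	ls -l /var/run/socket_vmnet
	# restart the root service, then probe the socket the same way minikube does;
	# socket_vmnet_client execs the given command only after it connects, so
	# 'true' exits 0 only if the daemon is reachable
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true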

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-089000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-089000 create -f testdata/busybox.yaml: exit status 1 (28.778459ms)

** stderr ** 
	error: context "embed-certs-089000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-089000 create -f testdata/busybox.yaml failed: exit status 1
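Because FirstStart never created the cluster, no context named embed-certs-089000 was written to the kubeconfig, so this and every later kubectl step in the group fails identically. This is easy to confirm by hand with stock kubectl and the KUBECONFIG path shown in the logs:

	# embed-certs-089000 will be absent from this list, matching the error above
	KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig kubectl config get-contexts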
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (32.510167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (32.173625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-089000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-089000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-089000 describe deploy/metrics-server -n kube-system: exit status 1 (27.147084ms)

** stderr ** 
	error: context "embed-certs-089000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-089000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (33.021583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
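For context: this test enables metrics-server with a substituted image and registry, then checks that the deployment spec carries the rewritten reference. The manual equivalent, using the exact flags from the log (it can only succeed once a cluster actually exists):

	out/minikube-darwin-arm64 -p embed-certs-089000 addons enable metrics-server \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	# on a healthy cluster the deployment would then reference
	# fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-089000 -n kube-system describe deploy/metrics-server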

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.188819917s)

-- stdout --
	* [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	* Restarting existing qemu2 VM for "embed-certs-089000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-089000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:46:51.472612   10863 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:51.472771   10863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:51.472774   10863 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:51.472777   10863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:51.472907   10863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:51.474119   10863 out.go:352] Setting JSON to false
	I1211 15:46:51.492072   10863 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6381,"bootTime":1733954430,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:46:51.492156   10863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:46:51.497456   10863 out.go:177] * [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:46:51.505467   10863 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:46:51.505502   10863 notify.go:220] Checking for updates...
	I1211 15:46:51.512413   10863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:46:51.515420   10863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:46:51.518465   10863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:46:51.519923   10863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:46:51.523419   10863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:46:51.526759   10863 config.go:182] Loaded profile config "embed-certs-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:51.527039   10863 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:46:51.528878   10863 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:46:51.536495   10863 start.go:297] selected driver: qemu2
	I1211 15:46:51.536500   10863 start.go:901] validating driver "qemu2" against &{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:51.536563   10863 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:46:51.539297   10863 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:46:51.539324   10863 cni.go:84] Creating CNI manager for ""
	I1211 15:46:51.539349   10863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:46:51.539374   10863 start.go:340] cluster config:
	{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:51.543925   10863 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:51.552444   10863 out.go:177] * Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	I1211 15:46:51.556398   10863 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:46:51.556413   10863 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:46:51.556421   10863 cache.go:56] Caching tarball of preloaded images
	I1211 15:46:51.556487   10863 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:46:51.556492   10863 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:46:51.556549   10863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/embed-certs-089000/config.json ...
	I1211 15:46:51.556971   10863 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:51.556999   10863 start.go:364] duration metric: took 22µs to acquireMachinesLock for "embed-certs-089000"
	I1211 15:46:51.557007   10863 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:46:51.557013   10863 fix.go:54] fixHost starting: 
	I1211 15:46:51.557125   10863 fix.go:112] recreateIfNeeded on embed-certs-089000: state=Stopped err=<nil>
	W1211 15:46:51.557133   10863 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:46:51.565415   10863 out.go:177] * Restarting existing qemu2 VM for "embed-certs-089000" ...
	I1211 15:46:51.569418   10863 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:51.569450   10863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:84:c6:7c:6d:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:51.571711   10863 main.go:141] libmachine: STDOUT: 
	I1211 15:46:51.571731   10863 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:51.571761   10863 fix.go:56] duration metric: took 14.74675ms for fixHost
	I1211 15:46:51.571766   10863 start.go:83] releasing machines lock for "embed-certs-089000", held for 14.763709ms
	W1211 15:46:51.571772   10863 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:51.571817   10863 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:51.571822   10863 start.go:729] Will try again in 5 seconds ...
	I1211 15:46:56.573933   10863 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:56.574387   10863 start.go:364] duration metric: took 337.75µs to acquireMachinesLock for "embed-certs-089000"
	I1211 15:46:56.574523   10863 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:46:56.574547   10863 fix.go:54] fixHost starting: 
	I1211 15:46:56.575262   10863 fix.go:112] recreateIfNeeded on embed-certs-089000: state=Stopped err=<nil>
	W1211 15:46:56.575292   10863 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:46:56.579728   10863 out.go:177] * Restarting existing qemu2 VM for "embed-certs-089000" ...
	I1211 15:46:56.583614   10863 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:56.583848   10863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:84:c6:7c:6d:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/embed-certs-089000/disk.qcow2
	I1211 15:46:56.593902   10863 main.go:141] libmachine: STDOUT: 
	I1211 15:46:56.593976   10863 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:56.594065   10863 fix.go:56] duration metric: took 19.523208ms for fixHost
	I1211 15:46:56.594089   10863 start.go:83] releasing machines lock for "embed-certs-089000", held for 19.67775ms
	W1211 15:46:56.594326   10863 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:56.601594   10863 out.go:201] 
	W1211 15:46:56.604727   10863 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:46:56.604755   10863 out.go:270] * 
	* 
	W1211 15:46:56.606472   10863 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:46:56.615728   10863 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
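The second start takes the existing-profile path (fixHost/recreateIfNeeded rather than createHost) but launches QEMU through the same wrapper, so it hits the identical refusal. Schematically, with most arguments elided, the launch shown in the log is:

	# socket_vmnet_client connects to the daemon's socket and execs QEMU,
	# handing over the connected descriptor as fd 3 for -netdev socket,fd=3
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -accel hvf ... -netdev socket,id=net0,fd=3 ...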
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (68.685958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
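Since every failure in this group is the same daemon refusal, the remaining sub-tests below add no new signal. As a hedged aside: the qemu2 driver can also run on QEMU's built-in user-mode networking, which needs no socket_vmnet daemon at all (at the cost of host-to-guest routing); the exact flag value is version-dependent, so check "minikube start --help" before relying on this sketch:

	# hypothetical workaround, not part of this test run
	out/minikube-darwin-arm64 start -p embed-certs-089000 --driver=qemu2 --network=user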

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-089000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (34.270209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-089000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.816333ms)

** stderr ** 
	error: context "embed-certs-089000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (31.935625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-089000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
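The want list above is the stock control-plane image set for v1.31.2. On a working cluster the comparison can be reproduced with the same command the test runs; here the profile has no running VM, so every expected image is reported missing:

	out/minikube-darwin-arm64 -p embed-certs-089000 image list --format=json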
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (32.45925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-089000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-089000 --alsologtostderr -v=1: exit status 83 (45.184542ms)

-- stdout --
	* The control-plane node embed-certs-089000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-089000"

-- /stdout --
** stderr ** 
	I1211 15:46:56.899525   10889 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:56.899696   10889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:56.899700   10889 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:56.899703   10889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:56.899828   10889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:56.900050   10889 out.go:352] Setting JSON to false
	I1211 15:46:56.900058   10889 mustload.go:65] Loading cluster: embed-certs-089000
	I1211 15:46:56.900287   10889 config.go:182] Loaded profile config "embed-certs-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:56.904860   10889 out.go:177] * The control-plane node embed-certs-089000 host is not running: state=Stopped
	I1211 15:46:56.909004   10889 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-089000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-089000 --alsologtostderr -v=1 failed: exit status 83
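Unlike the start failures, pause fails gracefully: it loads the profile, sees state=Stopped, prints the advisory and exits 83 rather than attempting to pause anything. The post-mortem status probe below is the same check one would run by hand:

	# {{.Host}} renders only the host state; here it prints "Stopped" and exits 7
	out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000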
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (32.48725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (31.675333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-872000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-872000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.19378725s)

-- stdout --
	* [default-k8s-diff-port-872000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-872000" primary control-plane node in "default-k8s-diff-port-872000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-872000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:46:57.343810   10913 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:46:57.343956   10913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:57.343960   10913 out.go:358] Setting ErrFile to fd 2...
	I1211 15:46:57.343962   10913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:46:57.344095   10913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:46:57.345166   10913 out.go:352] Setting JSON to false
	I1211 15:46:57.362413   10913 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6387,"bootTime":1733954430,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:46:57.362487   10913 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:46:57.366980   10913 out.go:177] * [default-k8s-diff-port-872000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:46:57.373950   10913 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:46:57.373995   10913 notify.go:220] Checking for updates...
	I1211 15:46:57.380879   10913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:46:57.384946   10913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:46:57.386362   10913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:46:57.389956   10913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:46:57.393033   10913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:46:57.396269   10913 config.go:182] Loaded profile config "cert-expiration-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:57.396331   10913 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:46:57.396376   10913 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:46:57.400917   10913 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:46:57.407988   10913 start.go:297] selected driver: qemu2
	I1211 15:46:57.407994   10913 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:46:57.408001   10913 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:46:57.410331   10913 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:46:57.413887   10913 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:46:57.417038   10913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:46:57.417055   10913 cni.go:84] Creating CNI manager for ""
	I1211 15:46:57.417077   10913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:46:57.417081   10913 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:46:57.417115   10913 start.go:340] cluster config:
	{Name:default-k8s-diff-port-872000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:46:57.421712   10913 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:46:57.429933   10913 out.go:177] * Starting "default-k8s-diff-port-872000" primary control-plane node in "default-k8s-diff-port-872000" cluster
	I1211 15:46:57.433937   10913 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:46:57.433961   10913 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:46:57.433967   10913 cache.go:56] Caching tarball of preloaded images
	I1211 15:46:57.434050   10913 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:46:57.434056   10913 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:46:57.434117   10913 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/default-k8s-diff-port-872000/config.json ...
	I1211 15:46:57.434128   10913 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/default-k8s-diff-port-872000/config.json: {Name:mk093fd2693f2c54b4e03ddb50404eb47e501c3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:46:57.434535   10913 start.go:360] acquireMachinesLock for default-k8s-diff-port-872000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:46:57.434590   10913 start.go:364] duration metric: took 44.625µs to acquireMachinesLock for "default-k8s-diff-port-872000"
	I1211 15:46:57.434602   10913 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:46:57.434649   10913 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:46:57.442977   10913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:46:57.459928   10913 start.go:159] libmachine.API.Create for "default-k8s-diff-port-872000" (driver="qemu2")
	I1211 15:46:57.459954   10913 client.go:168] LocalClient.Create starting
	I1211 15:46:57.460031   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:46:57.460072   10913 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:57.460083   10913 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:57.460124   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:46:57.460155   10913 main.go:141] libmachine: Decoding PEM data...
	I1211 15:46:57.460164   10913 main.go:141] libmachine: Parsing certificate...
	I1211 15:46:57.460565   10913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:46:57.615159   10913 main.go:141] libmachine: Creating SSH key...
	I1211 15:46:57.761679   10913 main.go:141] libmachine: Creating Disk image...
	I1211 15:46:57.761689   10913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:46:57.761919   10913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:46:57.771790   10913 main.go:141] libmachine: STDOUT: 
	I1211 15:46:57.771811   10913 main.go:141] libmachine: STDERR: 
	I1211 15:46:57.771872   10913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2 +20000M
	I1211 15:46:57.780086   10913 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:46:57.780103   10913 main.go:141] libmachine: STDERR: 
	I1211 15:46:57.780117   10913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:46:57.780124   10913 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:46:57.780136   10913 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:46:57.780160   10913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:2f:de:81:5a:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:46:57.781872   10913 main.go:141] libmachine: STDOUT: 
	I1211 15:46:57.781887   10913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:46:57.781908   10913 client.go:171] duration metric: took 321.955167ms to LocalClient.Create
	I1211 15:46:59.783999   10913 start.go:128] duration metric: took 2.349400333s to createHost
	I1211 15:46:59.784076   10913 start.go:83] releasing machines lock for "default-k8s-diff-port-872000", held for 2.349548542s
	W1211 15:46:59.784184   10913 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:59.810122   10913 out.go:177] * Deleting "default-k8s-diff-port-872000" in qemu2 ...
	W1211 15:46:59.874623   10913 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:46:59.874667   10913 start.go:729] Will try again in 5 seconds ...
	I1211 15:47:04.876800   10913 start.go:360] acquireMachinesLock for default-k8s-diff-port-872000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:04.877342   10913 start.go:364] duration metric: took 435.75µs to acquireMachinesLock for "default-k8s-diff-port-872000"
	I1211 15:47:04.877509   10913 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:47:04.877889   10913 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:47:04.888538   10913 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:47:04.936184   10913 start.go:159] libmachine.API.Create for "default-k8s-diff-port-872000" (driver="qemu2")
	I1211 15:47:04.936243   10913 client.go:168] LocalClient.Create starting
	I1211 15:47:04.936368   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:47:04.936443   10913 main.go:141] libmachine: Decoding PEM data...
	I1211 15:47:04.936460   10913 main.go:141] libmachine: Parsing certificate...
	I1211 15:47:04.936518   10913 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:47:04.936599   10913 main.go:141] libmachine: Decoding PEM data...
	I1211 15:47:04.936612   10913 main.go:141] libmachine: Parsing certificate...
	I1211 15:47:04.937413   10913 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:47:05.121772   10913 main.go:141] libmachine: Creating SSH key...
	I1211 15:47:05.446804   10913 main.go:141] libmachine: Creating Disk image...
	I1211 15:47:05.446815   10913 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:47:05.447039   10913 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:47:05.457154   10913 main.go:141] libmachine: STDOUT: 
	I1211 15:47:05.457175   10913 main.go:141] libmachine: STDERR: 
	I1211 15:47:05.457252   10913 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2 +20000M
	I1211 15:47:05.465769   10913 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:47:05.465781   10913 main.go:141] libmachine: STDERR: 
	I1211 15:47:05.465798   10913 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:47:05.465806   10913 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:47:05.465814   10913 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:05.465848   10913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b6:c8:5c:24:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:47:05.467539   10913 main.go:141] libmachine: STDOUT: 
	I1211 15:47:05.467552   10913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:05.467568   10913 client.go:171] duration metric: took 531.336417ms to LocalClient.Create
	I1211 15:47:07.469730   10913 start.go:128] duration metric: took 2.591884667s to createHost
	I1211 15:47:07.469786   10913 start.go:83] releasing machines lock for "default-k8s-diff-port-872000", held for 2.592500875s
	W1211 15:47:07.470134   10913 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-872000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-872000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:07.474445   10913 out.go:201] 
	W1211 15:47:07.482233   10913 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:47:07.482267   10913 out.go:270] * 
	* 
	W1211 15:47:07.484749   10913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:47:07.492168   10913 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-872000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (70.163459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.27s)
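The failure mode above, repeated in every start attempt in this group, is the same: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never launched and minikube exits with GUEST_PROVISION. A minimal Go sketch (not part of the test suite; socket path taken from the log) that reproduces the connectivity check:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the same unix socket that socket_vmnet_client uses; a
    	// "connection refused" here reproduces the error captured above
    	// and means the socket_vmnet daemon is not running on the agent.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is listening")
    }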
TestStartStop/group/newest-cni/serial/FirstStart (10.03s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-945000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-945000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.964517542s)
-- stdout --
	* [newest-cni-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-945000" primary control-plane node in "newest-cni-945000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-945000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1211 15:47:00.090900   10929 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:47:00.091055   10929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:00.091058   10929 out.go:358] Setting ErrFile to fd 2...
	I1211 15:47:00.091060   10929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:00.091192   10929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:47:00.092464   10929 out.go:352] Setting JSON to false
	I1211 15:47:00.110424   10929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6390,"bootTime":1733954430,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:47:00.110499   10929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:47:00.116039   10929 out.go:177] * [newest-cni-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:47:00.123965   10929 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:47:00.124028   10929 notify.go:220] Checking for updates...
	I1211 15:47:00.131817   10929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:47:00.134999   10929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:47:00.137984   10929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:47:00.141012   10929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:47:00.143971   10929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:47:00.147344   10929 config.go:182] Loaded profile config "default-k8s-diff-port-872000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:47:00.147418   10929 config.go:182] Loaded profile config "multinode-921000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:47:00.147465   10929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:47:00.151980   10929 out.go:177] * Using the qemu2 driver based on user configuration
	I1211 15:47:00.158969   10929 start.go:297] selected driver: qemu2
	I1211 15:47:00.158974   10929 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:47:00.158980   10929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:47:00.161689   10929 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1211 15:47:00.161732   10929 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1211 15:47:00.164996   10929 out.go:177] * Automatically selected the socket_vmnet network
	I1211 15:47:00.172079   10929 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1211 15:47:00.172095   10929 cni.go:84] Creating CNI manager for ""
	I1211 15:47:00.172121   10929 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:47:00.172128   10929 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:47:00.172156   10929 start.go:340] cluster config:
	{Name:newest-cni-945000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-945000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:47:00.176990   10929 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:47:00.184940   10929 out.go:177] * Starting "newest-cni-945000" primary control-plane node in "newest-cni-945000" cluster
	I1211 15:47:00.188966   10929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:47:00.188984   10929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:47:00.188994   10929 cache.go:56] Caching tarball of preloaded images
	I1211 15:47:00.189075   10929 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:47:00.189082   10929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:47:00.189138   10929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/newest-cni-945000/config.json ...
	I1211 15:47:00.189150   10929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/newest-cni-945000/config.json: {Name:mk3a74130fc10476ca4494c7af74f9caac2fc1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:47:00.189428   10929 start.go:360] acquireMachinesLock for newest-cni-945000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:00.189480   10929 start.go:364] duration metric: took 45.958µs to acquireMachinesLock for "newest-cni-945000"
	I1211 15:47:00.189491   10929 start.go:93] Provisioning new machine with config: &{Name:newest-cni-945000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:newest-cni-945000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:47:00.189533   10929 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:47:00.198009   10929 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:47:00.215900   10929 start.go:159] libmachine.API.Create for "newest-cni-945000" (driver="qemu2")
	I1211 15:47:00.215935   10929 client.go:168] LocalClient.Create starting
	I1211 15:47:00.216010   10929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:47:00.216049   10929 main.go:141] libmachine: Decoding PEM data...
	I1211 15:47:00.216059   10929 main.go:141] libmachine: Parsing certificate...
	I1211 15:47:00.216109   10929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:47:00.216140   10929 main.go:141] libmachine: Decoding PEM data...
	I1211 15:47:00.216155   10929 main.go:141] libmachine: Parsing certificate...
	I1211 15:47:00.216567   10929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:47:00.382398   10929 main.go:141] libmachine: Creating SSH key...
	I1211 15:47:00.541638   10929 main.go:141] libmachine: Creating Disk image...
	I1211 15:47:00.541645   10929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:47:00.541871   10929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:00.551947   10929 main.go:141] libmachine: STDOUT: 
	I1211 15:47:00.551963   10929 main.go:141] libmachine: STDERR: 
	I1211 15:47:00.552023   10929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2 +20000M
	I1211 15:47:00.560504   10929 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:47:00.560519   10929 main.go:141] libmachine: STDERR: 
	I1211 15:47:00.560531   10929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:00.560546   10929 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:47:00.560562   10929 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:00.560596   10929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:39:4d:36:56:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:00.562354   10929 main.go:141] libmachine: STDOUT: 
	I1211 15:47:00.562374   10929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:00.562396   10929 client.go:171] duration metric: took 346.466333ms to LocalClient.Create
	I1211 15:47:02.564612   10929 start.go:128] duration metric: took 2.375123583s to createHost
	I1211 15:47:02.564676   10929 start.go:83] releasing machines lock for "newest-cni-945000", held for 2.375260334s
	W1211 15:47:02.564730   10929 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:02.575762   10929 out.go:177] * Deleting "newest-cni-945000" in qemu2 ...
	W1211 15:47:02.610140   10929 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:02.610185   10929 start.go:729] Will try again in 5 seconds ...
	I1211 15:47:07.610816   10929 start.go:360] acquireMachinesLock for newest-cni-945000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:07.610891   10929 start.go:364] duration metric: took 56.667µs to acquireMachinesLock for "newest-cni-945000"
	I1211 15:47:07.610906   10929 start.go:93] Provisioning new machine with config: &{Name:newest-cni-945000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:newest-cni-945000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1211 15:47:07.610956   10929 start.go:125] createHost starting for "" (driver="qemu2")
	I1211 15:47:07.616759   10929 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 15:47:07.632155   10929 start.go:159] libmachine.API.Create for "newest-cni-945000" (driver="qemu2")
	I1211 15:47:07.632186   10929 client.go:168] LocalClient.Create starting
	I1211 15:47:07.632258   10929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/ca.pem
	I1211 15:47:07.632301   10929 main.go:141] libmachine: Decoding PEM data...
	I1211 15:47:07.632310   10929 main.go:141] libmachine: Parsing certificate...
	I1211 15:47:07.632351   10929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20083-6627/.minikube/certs/cert.pem
	I1211 15:47:07.632368   10929 main.go:141] libmachine: Decoding PEM data...
	I1211 15:47:07.632375   10929 main.go:141] libmachine: Parsing certificate...
	I1211 15:47:07.632781   10929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso...
	I1211 15:47:07.843734   10929 main.go:141] libmachine: Creating SSH key...
	I1211 15:47:07.954045   10929 main.go:141] libmachine: Creating Disk image...
	I1211 15:47:07.954053   10929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1211 15:47:07.954259   10929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2.raw /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:07.963686   10929 main.go:141] libmachine: STDOUT: 
	I1211 15:47:07.963708   10929 main.go:141] libmachine: STDERR: 
	I1211 15:47:07.963769   10929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2 +20000M
	I1211 15:47:07.972011   10929 main.go:141] libmachine: STDOUT: Image resized.
	
	I1211 15:47:07.972030   10929 main.go:141] libmachine: STDERR: 
	I1211 15:47:07.972040   10929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:07.972046   10929 main.go:141] libmachine: Starting QEMU VM...
	I1211 15:47:07.972065   10929 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:07.972094   10929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:bd:6b:d1:92:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:07.973902   10929 main.go:141] libmachine: STDOUT: 
	I1211 15:47:07.973920   10929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:07.973934   10929 client.go:171] duration metric: took 341.754875ms to LocalClient.Create
	I1211 15:47:09.976041   10929 start.go:128] duration metric: took 2.365134458s to createHost
	I1211 15:47:09.976113   10929 start.go:83] releasing machines lock for "newest-cni-945000", held for 2.36528475s
	W1211 15:47:09.976513   10929 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-945000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-945000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:09.993263   10929 out.go:201] 
	W1211 15:47:09.998453   10929 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:47:09.998528   10929 out.go:270] * 
	* 
	W1211 15:47:10.001799   10929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:47:10.010099   10929 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-945000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000: exit status 7 (68.122125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.03s)
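The stderr log above also records minikube's recovery path: the first createHost fails, the half-created machine is deleted ("Deleting ... in qemu2"), minikube waits a fixed 5 seconds ("Will try again in 5 seconds ..."), and retries exactly once before exiting with GUEST_PROVISION. A compact sketch of that retry shape (stand-in helpers, not minikube's actual code):

    package main

    import (
    	"errors"
    	"log"
    	"time"
    )

    // createHost stands in for minikube's host-creation step; it always
    // fails here, mirroring the error captured in the log above.
    func createHost(profile string) error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    // deleteHost stands in for the cleanup step ("Deleting ... in qemu2").
    func deleteHost(profile string) {}

    func main() {
    	const profile = "newest-cni-945000"
    	if err := createHost(profile); err != nil {
    		log.Printf("! StartHost failed, but will try again: %v", err)
    		deleteHost(profile)
    		time.Sleep(5 * time.Second) // fixed backoff, as shown in the log
    		if err := createHost(profile); err != nil {
    			log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
    		}
    	}
    }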
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-872000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-872000 create -f testdata/busybox.yaml: exit status 1 (29.337167ms)
** stderr ** 
	error: context "default-k8s-diff-port-872000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-872000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (35.77525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (37.830875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
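This failure is downstream of FirstStart: the VM was never provisioned, so minikube never wrote a default-k8s-diff-port-872000 context into the kubeconfig, and every kubectl --context call exits 1 before reaching a cluster. A small sketch of the context check (kubeconfig path taken from the log; uses client-go, a library minikube itself depends on):

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the kubeconfig the tests point at and look for the
    	// profile's context; after a failed FirstStart it is absent,
    	// which matches the kubectl error above.
    	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/20083-6627/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if _, ok := cfg.Contexts["default-k8s-diff-port-872000"]; !ok {
    		fmt.Println(`context "default-k8s-diff-port-872000" does not exist`)
    	}
    }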
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.15s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-872000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-872000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-872000 describe deploy/metrics-server -n kube-system: exit status 1 (30.636709ms)
** stderr ** 
	error: context "default-k8s-diff-port-872000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-872000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (36.363084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.15s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-872000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-872000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.18847275s)
-- stdout --
	* [default-k8s-diff-port-872000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-872000" primary control-plane node in "default-k8s-diff-port-872000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-872000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-872000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1211 15:47:11.199632   10995 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:47:11.199761   10995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:11.199765   10995 out.go:358] Setting ErrFile to fd 2...
	I1211 15:47:11.199767   10995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:11.199893   10995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:47:11.200901   10995 out.go:352] Setting JSON to false
	I1211 15:47:11.218682   10995 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6401,"bootTime":1733954430,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:47:11.218753   10995 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:47:11.223982   10995 out.go:177] * [default-k8s-diff-port-872000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:47:11.230876   10995 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:47:11.230909   10995 notify.go:220] Checking for updates...
	I1211 15:47:11.238998   10995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:47:11.242018   10995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:47:11.244966   10995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:47:11.248022   10995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:47:11.249293   10995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:47:11.252253   10995 config.go:182] Loaded profile config "default-k8s-diff-port-872000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:47:11.252530   10995 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:47:11.255953   10995 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:47:11.260972   10995 start.go:297] selected driver: qemu2
	I1211 15:47:11.260977   10995 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:47:11.261020   10995 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:47:11.263511   10995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 15:47:11.263538   10995 cni.go:84] Creating CNI manager for ""
	I1211 15:47:11.263559   10995 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:47:11.263589   10995 start.go:340] cluster config:
	{Name:default-k8s-diff-port-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:47:11.268008   10995 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:47:11.275945   10995 out.go:177] * Starting "default-k8s-diff-port-872000" primary control-plane node in "default-k8s-diff-port-872000" cluster
	I1211 15:47:11.279028   10995 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:47:11.279042   10995 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:47:11.279053   10995 cache.go:56] Caching tarball of preloaded images
	I1211 15:47:11.279121   10995 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:47:11.279127   10995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:47:11.279174   10995 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/default-k8s-diff-port-872000/config.json ...
	I1211 15:47:11.279671   10995 start.go:360] acquireMachinesLock for default-k8s-diff-port-872000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:11.279702   10995 start.go:364] duration metric: took 23.916µs to acquireMachinesLock for "default-k8s-diff-port-872000"
	I1211 15:47:11.279711   10995 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:47:11.279716   10995 fix.go:54] fixHost starting: 
	I1211 15:47:11.279829   10995 fix.go:112] recreateIfNeeded on default-k8s-diff-port-872000: state=Stopped err=<nil>
	W1211 15:47:11.279837   10995 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:47:11.283975   10995 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-872000" ...
	I1211 15:47:11.290946   10995 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:11.290978   10995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b6:c8:5c:24:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:47:11.293177   10995 main.go:141] libmachine: STDOUT: 
	I1211 15:47:11.293196   10995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:11.293226   10995 fix.go:56] duration metric: took 13.50825ms for fixHost
	I1211 15:47:11.293231   10995 start.go:83] releasing machines lock for "default-k8s-diff-port-872000", held for 13.524667ms
	W1211 15:47:11.293237   10995 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:47:11.293281   10995 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:11.293285   10995 start.go:729] Will try again in 5 seconds ...
	I1211 15:47:16.295346   10995 start.go:360] acquireMachinesLock for default-k8s-diff-port-872000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:16.295920   10995 start.go:364] duration metric: took 463.583µs to acquireMachinesLock for "default-k8s-diff-port-872000"
	I1211 15:47:16.296078   10995 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:47:16.296096   10995 fix.go:54] fixHost starting: 
	I1211 15:47:16.296855   10995 fix.go:112] recreateIfNeeded on default-k8s-diff-port-872000: state=Stopped err=<nil>
	W1211 15:47:16.296882   10995 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:47:16.305707   10995 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-872000" ...
	I1211 15:47:16.311695   10995 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:16.311997   10995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b6:c8:5c:24:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/default-k8s-diff-port-872000/disk.qcow2
	I1211 15:47:16.322866   10995 main.go:141] libmachine: STDOUT: 
	I1211 15:47:16.322951   10995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:16.323083   10995 fix.go:56] duration metric: took 26.984ms for fixHost
	I1211 15:47:16.323110   10995 start.go:83] releasing machines lock for "default-k8s-diff-port-872000", held for 27.166708ms
	W1211 15:47:16.323371   10995 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-872000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-872000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:16.330675   10995 out.go:201] 
	W1211 15:47:16.333766   10995 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:47:16.333811   10995 out.go:270] * 
	* 
	W1211 15:47:16.336443   10995 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:47:16.344677   10995 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-872000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (71.07425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
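
Every qemu2 restart in this report fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so the VM never gets its network descriptor. A minimal pre-flight probe for the build agent is sketched below; it assumes only that a healthy daemon accepts connections on that unix socket.

-- example (Go) --
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The same dial socket_vmnet_client performs; "connection refused"
		// here reproduces the driver-start failure recorded above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
-- /example --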

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-945000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-945000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.189247333s)

-- stdout --
	* [newest-cni-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-945000" primary control-plane node in "newest-cni-945000" cluster
	* Restarting existing qemu2 VM for "newest-cni-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-945000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1211 15:47:13.638197   11016 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:47:13.638352   11016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:13.638355   11016 out.go:358] Setting ErrFile to fd 2...
	I1211 15:47:13.638357   11016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:13.638501   11016 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:47:13.639565   11016 out.go:352] Setting JSON to false
	I1211 15:47:13.656749   11016 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6403,"bootTime":1733954430,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:47:13.656818   11016 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:47:13.660765   11016 out.go:177] * [newest-cni-945000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:47:13.667717   11016 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:47:13.667759   11016 notify.go:220] Checking for updates...
	I1211 15:47:13.673638   11016 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:47:13.676675   11016 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:47:13.679696   11016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:47:13.682691   11016 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:47:13.685707   11016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:47:13.688985   11016 config.go:182] Loaded profile config "newest-cni-945000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:47:13.689265   11016 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:47:13.692666   11016 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:47:13.699748   11016 start.go:297] selected driver: qemu2
	I1211 15:47:13.699753   11016 start.go:901] validating driver "qemu2" against &{Name:newest-cni-945000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-945000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:47:13.699802   11016 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:47:13.702379   11016 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1211 15:47:13.702401   11016 cni.go:84] Creating CNI manager for ""
	I1211 15:47:13.702426   11016 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:47:13.702446   11016 start.go:340] cluster config:
	{Name:newest-cni-945000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-945000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:47:13.706883   11016 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:47:13.715694   11016 out.go:177] * Starting "newest-cni-945000" primary control-plane node in "newest-cni-945000" cluster
	I1211 15:47:13.718704   11016 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:47:13.718719   11016 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:47:13.718728   11016 cache.go:56] Caching tarball of preloaded images
	I1211 15:47:13.718782   11016 preload.go:172] Found /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1211 15:47:13.718788   11016 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:47:13.718839   11016 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/newest-cni-945000/config.json ...
	I1211 15:47:13.719390   11016 start.go:360] acquireMachinesLock for newest-cni-945000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:13.719423   11016 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "newest-cni-945000"
	I1211 15:47:13.719439   11016 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:47:13.719444   11016 fix.go:54] fixHost starting: 
	I1211 15:47:13.719570   11016 fix.go:112] recreateIfNeeded on newest-cni-945000: state=Stopped err=<nil>
	W1211 15:47:13.719578   11016 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:47:13.722766   11016 out.go:177] * Restarting existing qemu2 VM for "newest-cni-945000" ...
	I1211 15:47:13.730720   11016 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:13.730763   11016 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:bd:6b:d1:92:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:13.733159   11016 main.go:141] libmachine: STDOUT: 
	I1211 15:47:13.733178   11016 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:13.733208   11016 fix.go:56] duration metric: took 13.762583ms for fixHost
	I1211 15:47:13.733216   11016 start.go:83] releasing machines lock for "newest-cni-945000", held for 13.787667ms
	W1211 15:47:13.733221   11016 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:47:13.733264   11016 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:13.733269   11016 start.go:729] Will try again in 5 seconds ...
	I1211 15:47:18.735396   11016 start.go:360] acquireMachinesLock for newest-cni-945000: {Name:mkffc44973b306ec9d7c6618a9963d4c37891d54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 15:47:18.736001   11016 start.go:364] duration metric: took 482.959µs to acquireMachinesLock for "newest-cni-945000"
	I1211 15:47:18.736150   11016 start.go:96] Skipping create...Using existing machine configuration
	I1211 15:47:18.736170   11016 fix.go:54] fixHost starting: 
	I1211 15:47:18.737018   11016 fix.go:112] recreateIfNeeded on newest-cni-945000: state=Stopped err=<nil>
	W1211 15:47:18.737046   11016 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 15:47:18.741889   11016 out.go:177] * Restarting existing qemu2 VM for "newest-cni-945000" ...
	I1211 15:47:18.750563   11016 qemu.go:418] Using hvf for hardware acceleration
	I1211 15:47:18.750798   11016 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:bd:6b:d1:92:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20083-6627/.minikube/machines/newest-cni-945000/disk.qcow2
	I1211 15:47:18.762192   11016 main.go:141] libmachine: STDOUT: 
	I1211 15:47:18.762293   11016 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1211 15:47:18.762379   11016 fix.go:56] duration metric: took 26.211583ms for fixHost
	I1211 15:47:18.762405   11016 start.go:83] releasing machines lock for "newest-cni-945000", held for 26.37975ms
	W1211 15:47:18.762601   11016 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-945000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-945000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1211 15:47:18.769540   11016 out.go:201] 
	W1211 15:47:18.772691   11016 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1211 15:47:18.772726   11016 out.go:270] * 
	* 
	W1211 15:47:18.775134   11016 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:47:18.787588   11016 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-945000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000: exit status 7 (70.6865ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
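
For context on the -netdev socket,id=net0,fd=3 flag in the qemu command lines above: the client dials the daemon and hands qemu the already-connected descriptor, which lands at fd 3 in the child process. The Go sketch below reproduces that wiring; the os/exec semantics (ExtraFiles[0] becomes fd 3) are stdlib behavior, while the description of socket_vmnet_client is an assumption here, and the remaining qemu flags are elided.

-- example (Go) --
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("socket_vmnet unreachable: %v", err) // the failure in this report
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] appears as fd 3 in the child, matching "-netdev socket,...,fd=3".
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3") // other flags elided
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
-- /example --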

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-872000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (35.141792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-872000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-872000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-872000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.10325ms)

** stderr ** 
	error: context "default-k8s-diff-port-872000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-872000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (32.093084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-872000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (31.809417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
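
The (-want +got) legend above is go-cmp's diff convention: every expected image carries a leading "-" because a stopped host reports no images at all. A reduced sketch of that comparison style, not the test's actual code, using two image names from the list above:

-- example (Go) --
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // a stopped host returns no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}
-- /example --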

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-872000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-872000 --alsologtostderr -v=1: exit status 83 (43.396584ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-872000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-872000"

-- /stdout --
** stderr ** 
	I1211 15:47:16.626378   11035 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:47:16.626563   11035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:16.626567   11035 out.go:358] Setting ErrFile to fd 2...
	I1211 15:47:16.626569   11035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:16.626699   11035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:47:16.626909   11035 out.go:352] Setting JSON to false
	I1211 15:47:16.626917   11035 mustload.go:65] Loading cluster: default-k8s-diff-port-872000
	I1211 15:47:16.627127   11035 config.go:182] Loaded profile config "default-k8s-diff-port-872000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:47:16.632098   11035 out.go:177] * The control-plane node default-k8s-diff-port-872000 host is not running: state=Stopped
	I1211 15:47:16.636020   11035 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-872000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-872000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (32.834875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (31.9555ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-945000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000: exit status 7 (33.024333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-945000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-945000 --alsologtostderr -v=1: exit status 83 (43.442083ms)

-- stdout --
	* The control-plane node newest-cni-945000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-945000"

-- /stdout --
** stderr ** 
	I1211 15:47:18.976659   11060 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:47:18.976837   11060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:18.976841   11060 out.go:358] Setting ErrFile to fd 2...
	I1211 15:47:18.976843   11060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:47:18.976970   11060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:47:18.977175   11060 out.go:352] Setting JSON to false
	I1211 15:47:18.977184   11060 mustload.go:65] Loading cluster: newest-cni-945000
	I1211 15:47:18.977398   11060 config.go:182] Loaded profile config "newest-cni-945000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:47:18.981833   11060 out.go:177] * The control-plane node newest-cni-945000 host is not running: state=Stopped
	I1211 15:47:18.985912   11060 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-945000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-945000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000: exit status 7 (32.571958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-945000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000: exit status 7 (32.781208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-945000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 11.44
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 10.79
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 7.23
46 TestFunctional/serial/CopySyncFile 0.01
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.96
55 TestFunctional/serial/CacheCmd/cache/add_local 1.08
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.24
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.29
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
107 TestFunctional/parallel/ProfileCmd/profile_list 0.09
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 1.92
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 1.8
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
238 TestStoppedBinaryUpgrade/Setup 1.3
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
258 TestNoKubernetes/serial/ProfileList 0.11
259 TestNoKubernetes/serial/Stop 3.23
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
275 TestStartStop/group/old-k8s-version/serial/Stop 1.86
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
286 TestStartStop/group/no-preload/serial/Stop 3.68
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
297 TestStartStop/group/embed-certs/serial/Stop 3.41
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.21
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
313 TestStartStop/group/newest-cni/serial/Stop 3.32
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1211 15:21:53.639389    7135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1211 15:21:53.639783    7135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-273000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-273000: exit status 85 (105.890667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |          |
	|         | -p download-only-273000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 15:21:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 15:21:30.303049    7136 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:21:30.303231    7136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:21:30.303234    7136 out.go:358] Setting ErrFile to fd 2...
	I1211 15:21:30.303237    7136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:21:30.303375    7136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	W1211 15:21:30.303480    7136 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20083-6627/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20083-6627/.minikube/config/config.json: no such file or directory
	I1211 15:21:30.304945    7136 out.go:352] Setting JSON to true
	I1211 15:21:30.322760    7136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4860,"bootTime":1733954430,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:21:30.322838    7136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:21:30.328810    7136 out.go:97] [download-only-273000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:21:30.328978    7136 notify.go:220] Checking for updates...
	W1211 15:21:30.329047    7136 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 15:21:30.331804    7136 out.go:169] MINIKUBE_LOCATION=20083
	I1211 15:21:30.333495    7136 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:21:30.338867    7136 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:21:30.342901    7136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:21:30.346846    7136 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	W1211 15:21:30.352860    7136 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 15:21:30.353118    7136 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:21:30.356778    7136 out.go:97] Using the qemu2 driver based on user configuration
	I1211 15:21:30.356799    7136 start.go:297] selected driver: qemu2
	I1211 15:21:30.356803    7136 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:21:30.356872    7136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:21:30.359818    7136 out.go:169] Automatically selected the socket_vmnet network
	I1211 15:21:30.366433    7136 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1211 15:21:30.366530    7136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 15:21:30.366569    7136 cni.go:84] Creating CNI manager for ""
	I1211 15:21:30.366603    7136 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1211 15:21:30.366652    7136 start.go:340] cluster config:
	{Name:download-only-273000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:21:30.371278    7136 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:21:30.374898    7136 out.go:97] Downloading VM boot image ...
	I1211 15:21:30.374926    7136 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/iso/arm64/minikube-v1.34.0-1733936888-20083-arm64.iso
	I1211 15:21:39.669827    7136 out.go:97] Starting "download-only-273000" primary control-plane node in "download-only-273000" cluster
	I1211 15:21:39.669866    7136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:21:39.724778    7136 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:21:39.724785    7136 cache.go:56] Caching tarball of preloaded images
	I1211 15:21:39.725043    7136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:21:39.732136    7136 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1211 15:21:39.732143    7136 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:39.815808    7136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1211 15:21:52.307798    7136 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:52.307966    7136 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:53.002496    7136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1211 15:21:53.002694    7136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/download-only-273000/config.json ...
	I1211 15:21:53.002710    7136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/download-only-273000/config.json: {Name:mk8d33b5e53b9e4b65834ca6cf10315c93caa2b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:21:53.002986    7136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1211 15:21:53.003225    7136 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1211 15:21:53.599898    7136 out.go:193] 
	W1211 15:21:53.604976    7135 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109798380 0x109798380 0x109798380 0x109798380 0x109798380 0x109798380 0x109798380] Decompressors:map[bz2:0x140007204b0 gz:0x140007204b8 tar:0x14000720450 tar.bz2:0x14000720460 tar.gz:0x14000720480 tar.xz:0x14000720490 tar.zst:0x140007204a0 tbz2:0x14000720460 tgz:0x14000720480 txz:0x14000720490 tzst:0x140007204a0 xz:0x140007204c0 zip:0x140007204d0 zst:0x140007204c8] Getters:map[file:0x140005a08f0 http:0x140004ae5f0 https:0x140004ae640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1211 15:21:53.605001    7136 out_reason.go:110] 
	W1211 15:21:53.610876    7136 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 15:21:53.613954    7136 out.go:193] 
	
	
	* The control-plane node download-only-273000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-273000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)
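
Note: the logs above capture why the v1.20.0 download-only tests fail on this arm64 agent: dl.k8s.io serves no kubectl checksum for darwin/arm64 at v1.20.0 (upstream only began publishing darwin/arm64 client binaries in later releases), so the cache step dies with "bad response code: 404". A hedged reproduction of the two cases seen in this run:

	# v1.20.0 was never published for darwin/arm64 -> 404 (matches the getter error above)
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# v1.31.2 downloads successfully later in this report -> 200
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256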

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-273000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (11.44s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (11.4354195s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (11.44s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1211 15:22:05.463050    7135 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1211 15:22:05.463112    7135 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-352000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-352000: exit status 85 (85.049125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
	|         | -p download-only-273000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
	| delete  | -p download-only-273000        | download-only-273000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST | 11 Dec 24 15:21 PST |
	| start   | -o=json --download-only        | download-only-352000 | jenkins | v1.34.0 | 11 Dec 24 15:21 PST |                     |
	|         | -p download-only-352000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 15:21:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 15:21:54.060032    7163 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:21:54.060224    7163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:21:54.060227    7163 out.go:358] Setting ErrFile to fd 2...
	I1211 15:21:54.060230    7163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:21:54.060373    7163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:21:54.061567    7163 out.go:352] Setting JSON to true
	I1211 15:21:54.079319    7163 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4884,"bootTime":1733954430,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:21:54.079387    7163 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:21:54.084597    7163 out.go:97] [download-only-352000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:21:54.084704    7163 notify.go:220] Checking for updates...
	I1211 15:21:54.088390    7163 out.go:169] MINIKUBE_LOCATION=20083
	I1211 15:21:54.091437    7163 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:21:54.094360    7163 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:21:54.098415    7163 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:21:54.101473    7163 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	W1211 15:21:54.107395    7163 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 15:21:54.107590    7163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:21:54.110384    7163 out.go:97] Using the qemu2 driver based on user configuration
	I1211 15:21:54.110393    7163 start.go:297] selected driver: qemu2
	I1211 15:21:54.110396    7163 start.go:901] validating driver "qemu2" against <nil>
	I1211 15:21:54.110440    7163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 15:21:54.111791    7163 out.go:169] Automatically selected the socket_vmnet network
	I1211 15:21:54.118723    7163 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1211 15:21:54.118828    7163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 15:21:54.118848    7163 cni.go:84] Creating CNI manager for ""
	I1211 15:21:54.118873    7163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1211 15:21:54.118879    7163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 15:21:54.118930    7163 start.go:340] cluster config:
	{Name:download-only-352000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:21:54.123199    7163 iso.go:125] acquiring lock: {Name:mk6d189250a97e9b25ad80600365a870e8f980a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 15:21:54.126445    7163 out.go:97] Starting "download-only-352000" primary control-plane node in "download-only-352000" cluster
	I1211 15:21:54.126455    7163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:21:54.184365    7163 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:21:54.184381    7163 cache.go:56] Caching tarball of preloaded images
	I1211 15:21:54.184583    7163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:21:54.188039    7163 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1211 15:21:54.188046    7163 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:21:54.265526    7163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1211 15:22:03.207590    7163 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:22:03.207769    7163 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1211 15:22:03.729466    7163 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1211 15:22:03.729663    7163 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/download-only-352000/config.json ...
	I1211 15:22:03.729683    7163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20083-6627/.minikube/profiles/download-only-352000/config.json: {Name:mk9e902abd415c38f312a81f6e1cbffe8050dc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 15:22:03.731031    7163 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1211 15:22:03.731222    7163 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20083-6627/.minikube/cache/darwin/arm64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-352000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-352000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-352000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
I1211 15:22:05.997517    7135 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-893000 --alsologtostderr --binary-mirror http://127.0.0.1:61210 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-893000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-893000
--- PASS: TestBinaryMirror (0.30s)
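
Note: --binary-mirror points the kubectl/kubelet/kubeadm downloads at an arbitrary HTTP endpoint instead of dl.k8s.io; the test serves one on 127.0.0.1:61210. A minimal sketch of the same idea, assuming a local directory laid out like the release mirror (the directory name and profile name here are hypothetical):

	python3 -m http.server 61210 --directory ./mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:61210 --driver=qemu2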

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-645000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-645000: exit status 85 (64.143042ms)

-- stdout --
	* Profile "addons-645000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-645000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-645000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-645000: exit status 85 (65.296958ms)

-- stdout --
	* Profile "addons-645000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-645000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
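
Note: both PreSetup tests assert the failure mode, not success: addons enable/disable against a profile that does not exist must exit 85 and point at "minikube start". A quick manual check of the same contract:

	out/minikube-darwin-arm64 addons enable dashboard -p addons-645000
	echo $?   # 85 while the profile does not exist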

TestHyperKitDriverInstallOrUpdate (10.79s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1211 15:43:27.665729    7135 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1211 15:43:27.665916    7135 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1211 15:43:29.597923    7135 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1211 15:43:29.598192    7135 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1211 15:43:29.598244    7135 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.79s)
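
Note: the install.go lines above show the validation path: a driver binary found on PATH (here a version-less test fixture) fails its version check with exit status 1, so minikube re-downloads a known-good docker-machine-driver-hyperkit. A sketch of inspecting what minikube would pick up (this assumes the minikube fork of the driver answers a "version" subcommand, which is what the validation appears to invoke):

	which docker-machine-driver-hyperkit
	docker-machine-driver-hyperkit version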

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status: exit status 7 (35.2015ms)

-- stdout --
	nospam-911000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status: exit status 7 (34.554125ms)

-- stdout --
	nospam-911000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status: exit status 7 (34.396084ms)

-- stdout --
	nospam-911000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
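
Note: this test leans on the status exit-code convention: with the host stopped, "status" prints the Stopped fields and exits 7 rather than 0, so scripts can tell a stopped profile from a healthy one without parsing output:

	out/minikube-darwin-arm64 -p nospam-911000 status
	echo $?   # 7 when host/kubelet/apiserver are all Stopped, as above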

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause: exit status 83 (43.148542ms)

-- stdout --
	* The control-plane node nospam-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-911000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause: exit status 83 (44.831875ms)

-- stdout --
	* The control-plane node nospam-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-911000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause: exit status 83 (44.903083ms)

-- stdout --
	* The control-plane node nospam-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-911000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause: exit status 83 (45.723125ms)

-- stdout --
	* The control-plane node nospam-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-911000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause: exit status 83 (43.387ms)

-- stdout --
	* The control-plane node nospam-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-911000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause: exit status 83 (45.273042ms)

-- stdout --
	* The control-plane node nospam-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-911000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (7.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 stop: (3.449052791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 stop: (1.840880666s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-911000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-911000 stop: (1.940186542s)
--- PASS: TestErrorSpam/stop (7.23s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20083-6627/.minikube/files/etc/test/nested/copy/7135/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.96s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3279020562/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cache add minikube-local-cache-test:functional-749000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 cache delete minikube-local-cache-test:functional-749000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-749000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)
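
Note: the cache tests above compose into one round trip: add remote images, add a locally built image, list, then delete. Condensed from the commands the tests actually run:

	out/minikube-darwin-arm64 -p functional-749000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-arm64 cache list
	out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1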

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 config get cpus: exit status 14 (35.952042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 config get cpus: exit status 14 (33.354875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
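
Note: ConfigCmd pins down another exit-code contract: "config get" on an unset key exits 14 with "specified key could not be found in config", while set and unset succeed silently. The cycle the test runs, condensed:

	out/minikube-darwin-arm64 -p functional-749000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-749000 config get cpus     # prints 2, exit 0
	out/minikube-darwin-arm64 -p functional-749000 config unset cpus
	out/minikube-darwin-arm64 -p functional-749000 config get cpus     # exit 14: key not found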

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-749000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-749000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.668542ms)

-- stdout --
	* [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1211 15:23:39.456088    7627 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:23:39.456240    7627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:39.456243    7627 out.go:358] Setting ErrFile to fd 2...
	I1211 15:23:39.456245    7627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:39.456390    7627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:23:39.457457    7627 out.go:352] Setting JSON to false
	I1211 15:23:39.476013    7627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4989,"bootTime":1733954430,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:23:39.476082    7627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:23:39.480040    7627 out.go:177] * [functional-749000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1211 15:23:39.487084    7627 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:23:39.487134    7627 notify.go:220] Checking for updates...
	I1211 15:23:39.494054    7627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:23:39.497052    7627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:23:39.499961    7627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:23:39.503047    7627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:23:39.506028    7627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:23:39.507710    7627 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:23:39.507981    7627 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:23:39.512048    7627 out.go:177] * Using the qemu2 driver based on existing profile
	I1211 15:23:39.518917    7627 start.go:297] selected driver: qemu2
	I1211 15:23:39.518923    7627 start.go:901] validating driver "qemu2" against &{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:23:39.518985    7627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:23:39.526006    7627 out.go:201] 
	W1211 15:23:39.530079    7627 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1211 15:23:39.532997    7627 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-749000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
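
Note: --dry-run exercises start's validation without creating a VM: 250MB is rejected against the 1800MB usable minimum (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation without --memory passes. A sketch of a passing dry run with an explicit allocation (2048MB is an assumed value above the floor):

	out/minikube-darwin-arm64 start -p functional-749000 --dry-run --memory 2048MB --alsologtostderr --driver=qemu2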

TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-749000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-749000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.563959ms)

-- stdout --
	* [functional-749000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1211 15:23:39.331865    7623 out.go:345] Setting OutFile to fd 1 ...
	I1211 15:23:39.332007    7623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:39.332010    7623 out.go:358] Setting ErrFile to fd 2...
	I1211 15:23:39.332013    7623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 15:23:39.332140    7623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20083-6627/.minikube/bin
	I1211 15:23:39.333674    7623 out.go:352] Setting JSON to false
	I1211 15:23:39.352051    7623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4989,"bootTime":1733954430,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1211 15:23:39.352129    7623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1211 15:23:39.357089    7623 out.go:177] * [functional-749000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1211 15:23:39.364029    7623 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 15:23:39.364094    7623 notify.go:220] Checking for updates...
	I1211 15:23:39.371026    7623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	I1211 15:23:39.374033    7623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1211 15:23:39.377066    7623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 15:23:39.380096    7623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	I1211 15:23:39.383038    7623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 15:23:39.386317    7623 config.go:182] Loaded profile config "functional-749000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1211 15:23:39.386588    7623 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 15:23:39.390970    7623 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1211 15:23:39.398028    7623 start.go:297] selected driver: qemu2
	I1211 15:23:39.398034    7623 start.go:901] validating driver "qemu2" against &{Name:functional-749000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-749000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 15:23:39.398093    7623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 15:23:39.404882    7623 out.go:201] 
	W1211 15:23:39.409031    7623 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1211 15:23:39.413120    7623 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
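This test repeats the same under-provisioned dry-run and asserts the localized output: the French "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the localized form of the same memory error ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A sketch of driving that check from Go, assuming the locale is switched through an environment variable such as LC_ALL (the mechanism is not visible in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Hypothetical reproduction of the localized dry-run; the LC_ALL
		// locale switch is an assumption, not taken from the log.
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-749000",
			"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
		cmd.Env = append(os.Environ(), "LC_ALL=fr")
		out, _ := cmd.CombinedOutput() // exit status 23 is expected here
		fmt.Println(strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY"))
	}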

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "51.8775ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "38.849667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "53.016375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.797292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.888171833s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-749000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image rm kicbase/echo-server:functional-749000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-749000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 image save --daemon kicbase/echo-server:functional-749000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-749000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012521209s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-749000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-749000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-749000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-749000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-177000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-177000 --output=json --user=testUser: (1.795533625s)
--- PASS: TestJSONOutput/stop/Command (1.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-510000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-510000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.808458ms)

-- stdout --
	{"specversion":"1.0","id":"91791ace-8d35-4c5d-99d5-aa89c656a821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-510000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d34f852d-5df7-44cb-93af-6b3734d1bf7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20083"}}
	{"specversion":"1.0","id":"cb2af689-13bc-4af5-9751-889dd469e90f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig"}}
	{"specversion":"1.0","id":"173a9643-710c-439f-b36b-f0406f043852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d498f14b-b9af-4723-bea0-f83191a237c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"58a2640a-2f36-443d-8073-076d2a21eb81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube"}}
	{"specversion":"1.0","id":"475def38-8bbe-4bd8-93f5-5b48ef49e9e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c337e7b-fc97-47bc-ae98-87b5c724ddd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-510000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-510000
--- PASS: TestErrorJSONOutput (0.21s)
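Each stdout line above is a CloudEvents-style JSON envelope with specversion, id, source, type, datacontenttype, and a string-keyed data payload. A small sketch of consuming such a stream, fed with the error event from the log; the struct is inferred from the fields visible above, not taken from minikube's source:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	// event mirrors the fields visible in the JSON lines above.
	type event struct {
		Specversion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		logLine := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		sc := bufio.NewScanner(strings.NewReader(logLine))
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip lines that are not JSON events
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}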

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.3s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-684000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-237000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (104.575541ms)

-- stdout --
	* [NoKubernetes-237000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20083
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20083-6627/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20083-6627/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
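The MK_USAGE failure above is the expected outcome of combining two mutually exclusive flags. A minimal sketch of that validation rule (hypothetical helper, not minikube's code):

	package main

	import (
		"errors"
		"fmt"
	)

	// validateStartFlags is a hypothetical stand-in for the usage check that
	// rejects --kubernetes-version combined with --no-kubernetes.
	func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return errors.New("MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		}
		return nil
	}

	func main() {
		// Mirrors the flags in the log: --no-kubernetes --kubernetes-version=1.20
		if err := validateStartFlags(true, "1.20"); err != nil {
			fmt.Println("X Exiting due to", err) // the real binary exits with status 14
		}
	}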

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-237000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-237000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.406042ms)

-- stdout --
	* The control-plane node NoKubernetes-237000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-237000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
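systemctl is-active --quiet exits 0 only when the unit is active, so the assertion here is simply that the command returns non-zero; in this run the non-zero exit (83) comes from the host being stopped, which satisfies the same condition. A standalone Go sketch of the check, reusing the command and profile name from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// A non-zero exit means kubelet is not an active unit (or, as in the
		// log, the host is not running at all); either way Kubernetes is down.
		cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-237000",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err == nil {
			log.Fatal("kubelet is unexpectedly active")
		}
		log.Print("kubelet not running, as expected")
	}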

TestNoKubernetes/serial/ProfileList (0.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.11s)

TestNoKubernetes/serial/Stop (3.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-237000
I1211 15:43:33.449452    7135 install.go:79] stdout: 
W1211 15:43:33.449630    7135 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit 

I1211 15:43:33.449657    7135 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit]
I1211 15:43:33.467044    7135 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit]
I1211 15:43:33.480787    7135 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit]
I1211 15:43:33.492583    7135 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3661743938/001/docker-machine-driver-hyperkit]
I1211 15:43:33.513757    7135 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1211 15:43:33.513872    7135 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-237000: (3.232005416s)
--- PASS: TestNoKubernetes/serial/Stop (3.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-237000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-237000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (50.65125ms)

-- stdout --
	* The control-plane node NoKubernetes-237000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-237000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (1.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-634000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-634000 --alsologtostderr -v=3: (1.861445292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (63.71775ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-634000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/no-preload/serial/Stop (3.68s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-854000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-854000 --alsologtostderr -v=3: (3.676658834s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.68s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-854000 -n no-preload-854000: exit status 7 (61.3995ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-854000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-089000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-089000 --alsologtostderr -v=3: (3.408983833s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (61.348334ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-089000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-872000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-872000 --alsologtostderr -v=3: (3.211322292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.21s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-945000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-945000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-945000 --alsologtostderr -v=3: (3.3230995s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-872000 -n default-k8s-diff-port-872000: exit status 7 (57.030375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-872000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-945000 -n newest-cni-945000: exit status 7 (58.850875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-945000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4139883880/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733959382335481000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4139883880/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733959382335481000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4139883880/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733959382335481000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4139883880/001/test-1733959382335481000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.803542ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:02.396825    7135 retry.go:31] will retry after 414.355592ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.887167ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:02.904490    7135 retry.go:31] will retry after 930.078786ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.745541ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:03.931684    7135 retry.go:31] will retry after 648.426385ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.286292ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:04.673879    7135 retry.go:31] will retry after 2.284884795s: exit status 83
I1211 15:23:05.844583    7135 retry.go:31] will retry after 2.489951204s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.409625ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:07.055644    7135 retry.go:31] will retry after 3.151315531s: exit status 83
I1211 15:23:08.336808    7135 retry.go:31] will retry after 9.206482915s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.805375ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:10.300136    7135 retry.go:31] will retry after 4.305996457s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.914958ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo umount -f /mount-9p": exit status 83 (48.826084ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4139883880/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.53s)

TestFunctional/parallel/MountCmd/specific-port (11.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4202721675/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (64.658875ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:14.934788    7135 retry.go:31] will retry after 732.774173ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.398584ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:15.758384    7135 retry.go:31] will retry after 605.594641ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.144875ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:16.455605    7135 retry.go:31] will retry after 1.109402586s: exit status 83
I1211 15:23:17.545498    7135 retry.go:31] will retry after 9.878152727s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.555208ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:17.658881    7135 retry.go:31] will retry after 1.980265377s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.763833ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:19.733234    7135 retry.go:31] will retry after 2.887825602s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.445958ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:22.712873    7135 retry.go:31] will retry after 3.588620875s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.634375ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "sudo umount -f /mount-9p": exit status 83 (46.831041ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-749000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4202721675/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.69s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3986890528/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3986890528/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3986890528/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (86.48925ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:26.649642    7135 retry.go:31] will retry after 314.508872ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (91.602708ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:27.058084    7135 retry.go:31] will retry after 1.010006951s: exit status 83
I1211 15:23:27.425867    7135 retry.go:31] will retry after 9.838790157s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (92.056417ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:28.162560    7135 retry.go:31] will retry after 1.381935451s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (89.890625ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:29.636700    7135 retry.go:31] will retry after 1.166593778s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (91.644416ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:30.897375    7135 retry.go:31] will retry after 3.633755788s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (91.419ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
I1211 15:23:34.624836    7135 retry.go:31] will retry after 4.014508617s: exit status 83
I1211 15:23:37.266934    7135 retry.go:31] will retry after 23.933864738s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-749000 ssh "findmnt -T" /mount1: exit status 83 (90.066875ms)

-- stdout --
	* The control-plane node functional-749000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-749000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3986890528/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3986890528/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-749000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3986890528/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.57s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-736000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-736000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-736000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/hosts:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/resolv.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-736000

>>> host: crictl pods:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crictl containers:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: describe netcat deployment:
error: context "cilium-736000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-736000" does not exist

>>> k8s: netcat logs:
error: context "cilium-736000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-736000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-736000" does not exist

>>> k8s: coredns logs:
error: context "cilium-736000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-736000" does not exist

>>> k8s: api server logs:
error: context "cilium-736000" does not exist

>>> host: /etc/cni:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: ip a s:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: ip r s:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: iptables-save:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: iptables table nat:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-736000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-736000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-736000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-736000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-736000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-736000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-736000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-736000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: kubelet daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: kubelet logs:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-736000

>>> host: docker daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: docker daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: docker system info:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: cri-docker daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: cri-docker daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: cri-dockerd version:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: containerd daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: containerd daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: containerd config dump:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crio daemon status:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crio daemon config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: /etc/crio:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

>>> host: crio config:
* Profile "cilium-736000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-736000"

----------------------- debugLogs end: cilium-736000 [took: 2.389925333s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-736000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-736000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-980000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-980000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
