Test Report: QEMU_macOS 18284

5ddb71fe0a42bb2133c9d6493465817bfdb3ae9e:2024-03-04:33407

Failed tests (140/251)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 39.4
7 TestDownloadOnly/v1.16.0/kubectl 0
31 TestOffline 10.59
36 TestAddons/Setup 10.8
37 TestCertOptions 10.04
38 TestCertExpiration 195.28
39 TestDockerFlags 10.1
40 TestForceSystemdFlag 10.07
41 TestForceSystemdEnv 12.2
47 TestErrorSpam/setup 10.09
56 TestFunctional/serial/StartWithProxy 9.88
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.55
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.71
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.13
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 89.29
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.41
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.5
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 35.44
150 TestImageBuild/serial/Setup 9.96
152 TestIngressAddonLegacy/StartLegacyK8sCluster 33.16
154 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 0.13
156 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.03
159 TestJSONOutput/start/Command 9.86
165 TestJSONOutput/pause/Command 0.08
171 TestJSONOutput/unpause/Command 0.05
188 TestMinikubeProfile 10.32
191 TestMountStart/serial/StartWithMountFirst 11.08
194 TestMultiNode/serial/FreshStart2Nodes 10.1
195 TestMultiNode/serial/DeployApp2Nodes 115.66
196 TestMultiNode/serial/PingHostFrom2Pods 0.09
197 TestMultiNode/serial/AddNode 0.08
198 TestMultiNode/serial/MultiNodeLabels 0.06
199 TestMultiNode/serial/ProfileList 0.11
200 TestMultiNode/serial/CopyFile 0.06
201 TestMultiNode/serial/StopNode 0.15
202 TestMultiNode/serial/StartAfterStop 0.12
203 TestMultiNode/serial/RestartKeepsNodes 5.38
204 TestMultiNode/serial/DeleteNode 0.11
205 TestMultiNode/serial/StopMultiNode 0.16
206 TestMultiNode/serial/RestartMultiNode 5.27
207 TestMultiNode/serial/ValidateNameConflict 21.61
211 TestPreload 10.16
213 TestScheduledStopUnix 10.04
214 TestSkaffold 16.54
217 TestRunningBinaryUpgrade 658.52
219 TestKubernetesUpgrade 15.64
232 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.44
233 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.42
235 TestStoppedBinaryUpgrade/Upgrade 611.09
237 TestPause/serial/Start 10.05
247 TestNoKubernetes/serial/StartWithK8s 9.88
248 TestNoKubernetes/serial/StartWithStopK8s 5.9
249 TestNoKubernetes/serial/Start 5.88
253 TestNoKubernetes/serial/StartNoArgs 5.89
255 TestNetworkPlugins/group/auto/Start 9.96
256 TestNetworkPlugins/group/kindnet/Start 10.02
257 TestNetworkPlugins/group/calico/Start 9.85
258 TestNetworkPlugins/group/custom-flannel/Start 9.79
259 TestNetworkPlugins/group/false/Start 9.74
260 TestNetworkPlugins/group/enable-default-cni/Start 9.86
261 TestNetworkPlugins/group/flannel/Start 9.76
263 TestNetworkPlugins/group/bridge/Start 9.98
264 TestNetworkPlugins/group/kubenet/Start 9.77
266 TestStartStop/group/old-k8s-version/serial/FirstStart 11.83
268 TestStartStop/group/no-preload/serial/FirstStart 9.92
269 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
270 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
273 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
274 TestStartStop/group/no-preload/serial/DeployApp 0.09
275 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
278 TestStartStop/group/no-preload/serial/SecondStart 5.3
279 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
280 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
281 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
282 TestStartStop/group/old-k8s-version/serial/Pause 0.11
284 TestStartStop/group/embed-certs/serial/FirstStart 10.27
285 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
286 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
287 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
288 TestStartStop/group/no-preload/serial/Pause 0.11
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.84
291 TestStartStop/group/embed-certs/serial/DeployApp 0.09
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
295 TestStartStop/group/embed-certs/serial/SecondStart 5.22
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
302 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
303 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
304 TestStartStop/group/embed-certs/serial/Pause 0.11
306 TestStartStop/group/newest-cni/serial/FirstStart 9.98
307 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
315 TestStartStop/group/newest-cni/serial/SecondStart 5.27
318 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
319 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.16.0/json-events (39.4s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.403440875s)

-- stdout --
	{"specversion":"1.0","id":"0e9a5122-975e-436a-8dc1-bc27c3f66c7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-150000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1f844d0-2129-4d01-89c8-f06e548967bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18284"}}
	{"specversion":"1.0","id":"29ce2459-95a0-4a8e-b813-63789f04181c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig"}}
	{"specversion":"1.0","id":"aea4c2f2-8c83-4473-bb8e-ead6feda5e85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0ca055df-4f68-422c-b6b7-95f4d7754144","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1145cab4-70b0-4c71-a290-42591c30b6c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube"}}
	{"specversion":"1.0","id":"e808d96c-d527-42b4-aec6-2fc2ba22505a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e2182e45-8888-4eb1-b5fb-a061f87c0b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb3f55b2-b08f-4b49-9d1e-1e8d875d9fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c6f8911c-afe4-440e-b97f-c44045968f99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ccb7c75-4d25-450e-b44e-c49101a1497c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-150000 in cluster download-only-150000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0c9dc95-cc5d-4a5f-a732-81ef79ce1553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9e47d36-8fc3-4cc1-8eaf-0b35eb83e80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340] Decompressors:map[bz2:0x140004640d0 gz:0x140004640d8 tar:0x1400000ffe0 tar.bz2:0x1400000fff0 tar.gz:0x14000464090 tar.xz:0x140004640a0 tar.zst:0x140004640c0 tbz2:0x1400000fff0 tgz:0x14000
464090 txz:0x140004640a0 tzst:0x140004640c0 xz:0x140004640e0 zip:0x14000464120 zst:0x140004640e8] Getters:map[file:0x140020ac7c0 http:0x1400072cb40 https:0x1400072cb90] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"99265730-2bae-4862-b8b8-d4699a2f75ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0304 04:04:13.502115   15488 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:04:13.502260   15488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:04:13.502264   15488 out.go:304] Setting ErrFile to fd 2...
	I0304 04:04:13.502266   15488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:04:13.502400   15488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	W0304 04:04:13.502485   15488 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18284-15061/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18284-15061/.minikube/config/config.json: no such file or directory
	I0304 04:04:13.503756   15488 out.go:298] Setting JSON to true
	I0304 04:04:13.521043   15488 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9225,"bootTime":1709544628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:04:13.521120   15488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:04:13.526733   15488 out.go:97] [download-only-150000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:04:13.530535   15488 out.go:169] MINIKUBE_LOCATION=18284
	W0304 04:04:13.526880   15488 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball: no such file or directory
	I0304 04:04:13.526905   15488 notify.go:220] Checking for updates...
	I0304 04:04:13.536628   15488 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:04:13.538116   15488 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:04:13.541688   15488 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:04:13.544691   15488 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	W0304 04:04:13.550630   15488 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0304 04:04:13.550828   15488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:04:13.553617   15488 out.go:97] Using the qemu2 driver based on user configuration
	I0304 04:04:13.553623   15488 start.go:299] selected driver: qemu2
	I0304 04:04:13.553625   15488 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:04:13.553673   15488 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:04:13.556659   15488 out.go:169] Automatically selected the socket_vmnet network
	I0304 04:04:13.561965   15488 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0304 04:04:13.562070   15488 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:04:13.562161   15488 cni.go:84] Creating CNI manager for ""
	I0304 04:04:13.562180   15488 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0304 04:04:13.562185   15488 start_flags.go:323] config:
	{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:04:13.567124   15488 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:04:13.571744   15488 out.go:97] Downloading VM boot image ...
	I0304 04:04:13.571783   15488 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0304 04:04:31.838820   15488 out.go:97] Starting control plane node download-only-150000 in cluster download-only-150000
	I0304 04:04:31.838848   15488 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:04:32.106764   15488 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:04:32.106821   15488 cache.go:56] Caching tarball of preloaded images
	I0304 04:04:32.107949   15488 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:04:32.113280   15488 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0304 04:04:32.113313   15488 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:32.713495   15488 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:04:51.553248   15488 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:51.553426   15488 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:52.193112   15488 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0304 04:04:52.193347   15488 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-150000/config.json ...
	I0304 04:04:52.193372   15488 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-150000/config.json: {Name:mkd0fc447bd9345c4f75479b3dc3e9e060131ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:04:52.194539   15488 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:04:52.194745   15488 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0304 04:04:52.828470   15488 out.go:169] 
	W0304 04:04:52.832586   15488 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340] Decompressors:map[bz2:0x140004640d0 gz:0x140004640d8 tar:0x1400000ffe0 tar.bz2:0x1400000fff0 tar.gz:0x14000464090 tar.xz:0x140004640a0 tar.zst:0x140004640c0 tbz2:0x1400000fff0 tgz:0x14000464090 txz:0x140004640a0 tzst:0x140004640c0 xz:0x140004640e0 zip:0x14000464120 zst:0x140004640e8] Getters:map[file:0x140020ac7c0 http:0x1400072cb40 https:0x1400072cb90] Dir:false ProgressListene
r:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0304 04:04:52.832621   15488 out_reason.go:110] 
	W0304 04:04:52.840398   15488 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:04:52.844506   15488 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-150000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (39.40s)
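Each stdout line in this failure is a CloudEvents-style JSON record emitted by `minikube start -o=json`; the fatal condition is the single event with type `io.k8s.sigs.minikube.error` carrying `exitcode: "40"`. A minimal sketch of picking such errors out of the stream (the sample events below are abridged stand-ins modeled on the log, not verbatim records):

```python
import json

def extract_errors(lines):
    """Return (exitcode, message) pairs for minikube JSON error events."""
    errors = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            data = event.get("data", {})
            errors.append((data.get("exitcode", ""), data.get("message", "")))
    return errors

# Abridged samples modeled on the events above.
sample = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Downloading Kubernetes v1.16.0 preload ..."}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"40","message":"Failed to cache kubectl: bad response code: 404"}}',
]
for code, msg in extract_errors(sample):
    print(code, msg)
```

Filtering on the event `type` rather than grepping for "error" avoids false positives from step messages that merely mention errors.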

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
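This test fails as a direct consequence of the previous one: the kubectl binary was never cached because the `.sha1` checksum URL returned 404. Kubernetes v1.16.0 predates Apple Silicon, so a `darwin/arm64` kubectl build for that release likely was never published. A small sketch that reconstructs the URL minikube tried, so the 404 can be confirmed by hand (version/os/arch values are taken from the log):

```shell
#!/bin/sh
# Reconstruct the download URLs from the failing test, then verify manually with:
#   curl -sI "$url" | head -n 1    (a 404 confirms the binary was never published)
ver="v1.16.0"; os="darwin"; arch="arm64"
url="https://dl.k8s.io/release/${ver}/bin/${os}/${arch}/kubectl"
echo "binary:   $url"
echo "checksum: ${url}.sha1"
```

Swapping `arch` to `amd64` is a quick way to check whether the release exists at all for that OS.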

TestOffline (10.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-878000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-878000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.409445208s)

-- stdout --
	* [offline-docker-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-878000 in cluster offline-docker-878000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-878000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
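Both VM creation attempts fail the same way: the qemu2 driver cannot dial `/var/run/socket_vmnet`, which suggests the socket_vmnet daemon is not running on this agent (the same "Connection refused" appears across most of the qemu2 failures in this report). A hedged diagnostic sketch (the socket path is taken from the error above; the helper function is our own, not part of minikube):

```shell
#!/bin/sh
# check_vmnet_socket PATH -> prints "ok" if PATH is a live unix socket, "missing" otherwise.
check_vmnet_socket() {
    if [ -S "$1" ]; then
        echo "ok"
    else
        echo "missing"
    fi
}

# The path minikube's qemu2 driver dials, per the error output:
check_vmnet_socket /var/run/socket_vmnet
```

If it prints "missing", the daemon needs to be (re)started on the agent before any qemu2-based test can pass.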
** stderr ** 
	I0304 04:13:33.145020   16846 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:13:33.145166   16846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:33.145170   16846 out.go:304] Setting ErrFile to fd 2...
	I0304 04:13:33.145172   16846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:33.145310   16846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:13:33.146582   16846 out.go:298] Setting JSON to false
	I0304 04:13:33.164546   16846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9785,"bootTime":1709544628,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:13:33.164625   16846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:13:33.169599   16846 out.go:177] * [offline-docker-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:13:33.177721   16846 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:13:33.177728   16846 notify.go:220] Checking for updates...
	I0304 04:13:33.180578   16846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:13:33.183585   16846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:13:33.186624   16846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:13:33.189534   16846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:13:33.192609   16846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:13:33.196069   16846 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:13:33.196119   16846 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:13:33.200525   16846 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:13:33.207628   16846 start.go:299] selected driver: qemu2
	I0304 04:13:33.207642   16846 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:13:33.207651   16846 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:13:33.209704   16846 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:13:33.212604   16846 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:13:33.215636   16846 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:13:33.215668   16846 cni.go:84] Creating CNI manager for ""
	I0304 04:13:33.215679   16846 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:13:33.215683   16846 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:13:33.215692   16846 start_flags.go:323] config:
	{Name:offline-docker-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-878000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:13:33.220291   16846 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:13:33.227568   16846 out.go:177] * Starting control plane node offline-docker-878000 in cluster offline-docker-878000
	I0304 04:13:33.231608   16846 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:13:33.231663   16846 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:13:33.231672   16846 cache.go:56] Caching tarball of preloaded images
	I0304 04:13:33.231745   16846 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:13:33.231751   16846 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:13:33.231816   16846 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/offline-docker-878000/config.json ...
	I0304 04:13:33.231826   16846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/offline-docker-878000/config.json: {Name:mk0229b3936caf20df3f558622738e0801ed53e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:13:33.232110   16846 start.go:365] acquiring machines lock for offline-docker-878000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:33.232145   16846 start.go:369] acquired machines lock for "offline-docker-878000" in 23.458µs
	I0304 04:13:33.232155   16846 start.go:93] Provisioning new machine with config: &{Name:offline-docker-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-878000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:33.232207   16846 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:33.236525   16846 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:33.251796   16846 start.go:159] libmachine.API.Create for "offline-docker-878000" (driver="qemu2")
	I0304 04:13:33.251826   16846 client.go:168] LocalClient.Create starting
	I0304 04:13:33.251889   16846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:33.251919   16846 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:33.251928   16846 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:33.251970   16846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:33.251992   16846 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:33.251999   16846 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:33.252366   16846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:33.395281   16846 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:33.558910   16846 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:33.558920   16846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:33.559169   16846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2
	I0304 04:13:33.577615   16846 main.go:141] libmachine: STDOUT: 
	I0304 04:13:33.577651   16846 main.go:141] libmachine: STDERR: 
	I0304 04:13:33.577729   16846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2 +20000M
	I0304 04:13:33.590549   16846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:33.590572   16846 main.go:141] libmachine: STDERR: 
	I0304 04:13:33.590593   16846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2
	I0304 04:13:33.590598   16846 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:33.590628   16846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:1d:e4:9c:59:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2
	I0304 04:13:33.592558   16846 main.go:141] libmachine: STDOUT: 
	I0304 04:13:33.592578   16846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:33.592597   16846 client.go:171] LocalClient.Create took 340.76825ms
	I0304 04:13:35.594656   16846 start.go:128] duration metric: createHost completed in 2.36245525s
	I0304 04:13:35.594673   16846 start.go:83] releasing machines lock for "offline-docker-878000", held for 2.362538167s
	W0304 04:13:35.594691   16846 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:35.607164   16846 out.go:177] * Deleting "offline-docker-878000" in qemu2 ...
	W0304 04:13:35.620297   16846 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:35.620304   16846 start.go:709] Will try again in 5 seconds ...
	I0304 04:13:40.622358   16846 start.go:365] acquiring machines lock for offline-docker-878000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:40.622519   16846 start.go:369] acquired machines lock for "offline-docker-878000" in 129.666µs
	I0304 04:13:40.622545   16846 start.go:93] Provisioning new machine with config: &{Name:offline-docker-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-878000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:40.622589   16846 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:40.712308   16846 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:40.867291   16846 start.go:159] libmachine.API.Create for "offline-docker-878000" (driver="qemu2")
	I0304 04:13:40.867326   16846 client.go:168] LocalClient.Create starting
	I0304 04:13:40.867426   16846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:40.867466   16846 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:40.867478   16846 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:40.867513   16846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:40.867537   16846 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:40.867544   16846 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:40.867780   16846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:41.273258   16846 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:41.440910   16846 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:41.440927   16846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:41.443731   16846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2
	I0304 04:13:41.461277   16846 main.go:141] libmachine: STDOUT: 
	I0304 04:13:41.461308   16846 main.go:141] libmachine: STDERR: 
	I0304 04:13:41.461378   16846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2 +20000M
	I0304 04:13:41.474889   16846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:41.474911   16846 main.go:141] libmachine: STDERR: 
	I0304 04:13:41.474924   16846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2
	I0304 04:13:41.474929   16846 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:41.474961   16846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:6e:b2:73:5a:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/offline-docker-878000/disk.qcow2
	I0304 04:13:41.477432   16846 main.go:141] libmachine: STDOUT: 
	I0304 04:13:41.477452   16846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:41.477465   16846 client.go:171] LocalClient.Create took 610.137792ms
	I0304 04:13:43.479642   16846 start.go:128] duration metric: createHost completed in 2.857040541s
	I0304 04:13:43.479706   16846 start.go:83] releasing machines lock for "offline-docker-878000", held for 2.857193834s
	W0304 04:13:43.480052   16846 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:43.489417   16846 out.go:177] 
	W0304 04:13:43.492448   16846 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:13:43.492638   16846 out.go:239] * 
	* 
	W0304 04:13:43.494473   16846 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:13:43.507409   16846 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-878000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-04 04:13:43.524007 -0800 PST m=+570.119479335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-878000 -n offline-docker-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-878000 -n offline-docker-878000: exit status 7 (73.404917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-878000
--- FAIL: TestOffline (10.59s)
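Every failure above traces to the same root cause: the qemu2 driver cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet` ("Connection refused"). A minimal diagnostic sketch for reproducing hosts follows; `check_socket` is a hypothetical helper, not part of minikube or socket_vmnet:

```shell
# Diagnostic sketch: confirm the socket_vmnet daemon socket exists
# before re-running the qemu2 driver tests.
check_socket() {
  local sock="$1"
  if [ -S "$sock" ]; then
    # -S is true only for an actual Unix domain socket
    echo "socket present: $sock"
  else
    echo "socket missing: $sock"
  fi
}

check_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the daemon before re-running the suite is the usual fix (e.g. `sudo brew services restart socket_vmnet`, assuming a Homebrew install of socket_vmnet).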

TestAddons/Setup (10.8s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-038000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-038000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.799156166s)

-- stdout --
	* [addons-038000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-038000 in cluster addons-038000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-038000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:05:36.838494   15662 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:05:36.838619   15662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:05:36.838623   15662 out.go:304] Setting ErrFile to fd 2...
	I0304 04:05:36.838625   15662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:05:36.838756   15662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:05:36.839819   15662 out.go:298] Setting JSON to false
	I0304 04:05:36.855887   15662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9308,"bootTime":1709544628,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:05:36.855949   15662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:05:36.859476   15662 out.go:177] * [addons-038000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:05:36.866544   15662 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:05:36.866611   15662 notify.go:220] Checking for updates...
	I0304 04:05:36.872461   15662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:05:36.875540   15662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:05:36.877007   15662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:05:36.880465   15662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:05:36.883472   15662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:05:36.886738   15662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:05:36.890455   15662 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:05:36.897492   15662 start.go:299] selected driver: qemu2
	I0304 04:05:36.897498   15662 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:05:36.897503   15662 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:05:36.899720   15662 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:05:36.902474   15662 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:05:36.905586   15662 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:05:36.905632   15662 cni.go:84] Creating CNI manager for ""
	I0304 04:05:36.905639   15662 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:05:36.905644   15662 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:05:36.905656   15662 start_flags.go:323] config:
	{Name:addons-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:05:36.910102   15662 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:05:36.917493   15662 out.go:177] * Starting control plane node addons-038000 in cluster addons-038000
	I0304 04:05:36.921596   15662 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:05:36.921614   15662 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:05:36.921633   15662 cache.go:56] Caching tarball of preloaded images
	I0304 04:05:36.921707   15662 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:05:36.921713   15662 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:05:36.921957   15662 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/addons-038000/config.json ...
	I0304 04:05:36.921969   15662 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/addons-038000/config.json: {Name:mk33aba79a1a6c2cbe71bad4abb80e432e35f27e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:05:36.922199   15662 start.go:365] acquiring machines lock for addons-038000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:05:36.922348   15662 start.go:369] acquired machines lock for "addons-038000" in 143.667µs
	I0304 04:05:36.922359   15662 start.go:93] Provisioning new machine with config: &{Name:addons-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:addons-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:05:36.922398   15662 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:05:36.927521   15662 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0304 04:05:36.946686   15662 start.go:159] libmachine.API.Create for "addons-038000" (driver="qemu2")
	I0304 04:05:36.946719   15662 client.go:168] LocalClient.Create starting
	I0304 04:05:36.946893   15662 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:05:37.170988   15662 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:05:37.231711   15662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:05:37.967973   15662 main.go:141] libmachine: Creating SSH key...
	I0304 04:05:38.043013   15662 main.go:141] libmachine: Creating Disk image...
	I0304 04:05:38.043019   15662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:05:38.043210   15662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2
	I0304 04:05:38.055788   15662 main.go:141] libmachine: STDOUT: 
	I0304 04:05:38.055808   15662 main.go:141] libmachine: STDERR: 
	I0304 04:05:38.055865   15662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2 +20000M
	I0304 04:05:38.066466   15662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:05:38.066498   15662 main.go:141] libmachine: STDERR: 
	I0304 04:05:38.066513   15662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2
	I0304 04:05:38.066518   15662 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:05:38.066551   15662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:74:d7:7f:6b:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2
	I0304 04:05:38.068313   15662 main.go:141] libmachine: STDOUT: 
	I0304 04:05:38.068327   15662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:05:38.068352   15662 client.go:171] LocalClient.Create took 1.121630625s
	I0304 04:05:40.070652   15662 start.go:128] duration metric: createHost completed in 3.148230292s
	I0304 04:05:40.070753   15662 start.go:83] releasing machines lock for "addons-038000", held for 3.148405458s
	W0304 04:05:40.070823   15662 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:05:40.084260   15662 out.go:177] * Deleting "addons-038000" in qemu2 ...
	W0304 04:05:40.116131   15662 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:05:40.116159   15662 start.go:709] Will try again in 5 seconds ...
	I0304 04:05:45.117223   15662 start.go:365] acquiring machines lock for addons-038000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:05:45.117691   15662 start.go:369] acquired machines lock for "addons-038000" in 344.375µs
	I0304 04:05:45.117808   15662 start.go:93] Provisioning new machine with config: &{Name:addons-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-038000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:05:45.118179   15662 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:05:45.128863   15662 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0304 04:05:45.177123   15662 start.go:159] libmachine.API.Create for "addons-038000" (driver="qemu2")
	I0304 04:05:45.177165   15662 client.go:168] LocalClient.Create starting
	I0304 04:05:45.177278   15662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:05:45.177345   15662 main.go:141] libmachine: Decoding PEM data...
	I0304 04:05:45.177365   15662 main.go:141] libmachine: Parsing certificate...
	I0304 04:05:45.177449   15662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:05:45.177492   15662 main.go:141] libmachine: Decoding PEM data...
	I0304 04:05:45.177505   15662 main.go:141] libmachine: Parsing certificate...
	I0304 04:05:45.178025   15662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:05:45.332286   15662 main.go:141] libmachine: Creating SSH key...
	I0304 04:05:45.536947   15662 main.go:141] libmachine: Creating Disk image...
	I0304 04:05:45.536954   15662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:05:45.537141   15662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2
	I0304 04:05:45.549984   15662 main.go:141] libmachine: STDOUT: 
	I0304 04:05:45.550012   15662 main.go:141] libmachine: STDERR: 
	I0304 04:05:45.550072   15662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2 +20000M
	I0304 04:05:45.561129   15662 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:05:45.561152   15662 main.go:141] libmachine: STDERR: 
	I0304 04:05:45.561169   15662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2
	I0304 04:05:45.561171   15662 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:05:45.561208   15662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:4d:e3:bc:1e:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/addons-038000/disk.qcow2
	I0304 04:05:45.562912   15662 main.go:141] libmachine: STDOUT: 
	I0304 04:05:45.562971   15662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:05:45.562985   15662 client.go:171] LocalClient.Create took 385.816833ms
	I0304 04:05:47.563979   15662 start.go:128] duration metric: createHost completed in 2.445731333s
	I0304 04:05:47.564085   15662 start.go:83] releasing machines lock for "addons-038000", held for 2.446379041s
	W0304 04:05:47.564435   15662 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-038000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-038000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:05:47.574058   15662 out.go:177] 
	W0304 04:05:47.580955   15662 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:05:47.580984   15662 out.go:239] * 
	* 
	W0304 04:05:47.583458   15662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:05:47.595994   15662 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-038000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.80s)
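Note: every failure in this report traces to the same root cause visible in the log above: the qemu2 driver could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal shell sketch of the pre-flight check an operator might run before retrying — the helper name is hypothetical (not part of the test suite), and the `brew services` command assumes a Homebrew-managed socket_vmnet install:

```shell
# Check whether the socket_vmnet daemon's Unix socket exists before starting
# minikube with --driver=qemu2 --network=socket_vmnet.
check_socket() {
  # `test -S` is true only when the path exists and is a Unix-domain socket.
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Path taken from the failure logs above.
if [ "$(check_socket /var/run/socket_vmnet)" = "missing" ]; then
  echo 'socket_vmnet not listening; try: sudo brew services start socket_vmnet'
fi
```

If the socket is missing, starting the daemon and rerunning the suite should clear this entire class of GUEST_PROVISION failures.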

TestCertOptions (10.04s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-861000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-861000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.7374465s)

-- stdout --
	* [cert-options-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-861000 in cluster cert-options-861000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-861000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-861000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-861000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (82.714375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-861000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-861000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-861000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-861000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-861000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (44.945625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-861000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-861000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-861000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-04 04:14:15.906038 -0800 PST m=+602.501702626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-861000 -n cert-options-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-861000 -n cert-options-861000: exit status 7 (32.70675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-861000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-861000
--- FAIL: TestCertOptions (10.04s)

TestCertExpiration (195.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-323000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-323000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.897975375s)

-- stdout --
	* [cert-expiration-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-323000 in cluster cert-expiration-323000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-323000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-323000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-323000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-323000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.199570333s)

-- stdout --
	* [cert-expiration-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-323000 in cluster cert-expiration-323000
	* Restarting existing qemu2 VM for "cert-expiration-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-323000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-323000 in cluster cert-expiration-323000
	* Restarting existing qemu2 VM for "cert-expiration-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-04 04:17:15.966479 -0800 PST m=+782.563211501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-323000 -n cert-expiration-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-323000 -n cert-expiration-323000: exit status 7 (69.427ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-323000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-323000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-323000
--- FAIL: TestCertExpiration (195.28s)

TestDockerFlags (10.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-169000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-169000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.83262625s)

-- stdout --
	* [docker-flags-169000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-169000 in cluster docker-flags-169000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-169000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:13:55.937568   17047 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:13:55.937706   17047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:55.937710   17047 out.go:304] Setting ErrFile to fd 2...
	I0304 04:13:55.937713   17047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:55.937844   17047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:13:55.939065   17047 out.go:298] Setting JSON to false
	I0304 04:13:55.955268   17047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9807,"bootTime":1709544628,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:13:55.955332   17047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:13:55.962214   17047 out.go:177] * [docker-flags-169000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:13:55.970229   17047 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:13:55.975119   17047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:13:55.970284   17047 notify.go:220] Checking for updates...
	I0304 04:13:55.982059   17047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:13:55.985135   17047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:13:55.988217   17047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:13:55.989605   17047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:13:55.993547   17047 config.go:182] Loaded profile config "force-systemd-flag-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:13:55.993620   17047 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:13:55.993673   17047 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:13:55.998123   17047 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:13:56.004153   17047 start.go:299] selected driver: qemu2
	I0304 04:13:56.004157   17047 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:13:56.004162   17047 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:13:56.006413   17047 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:13:56.009142   17047 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:13:56.012284   17047 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0304 04:13:56.012335   17047 cni.go:84] Creating CNI manager for ""
	I0304 04:13:56.012344   17047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:13:56.012348   17047 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:13:56.012361   17047 start_flags.go:323] config:
	{Name:docker-flags-169000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-169000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:13:56.017056   17047 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:13:56.025176   17047 out.go:177] * Starting control plane node docker-flags-169000 in cluster docker-flags-169000
	I0304 04:13:56.029154   17047 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:13:56.029171   17047 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:13:56.029185   17047 cache.go:56] Caching tarball of preloaded images
	I0304 04:13:56.029245   17047 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:13:56.029251   17047 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:13:56.029343   17047 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/docker-flags-169000/config.json ...
	I0304 04:13:56.029354   17047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/docker-flags-169000/config.json: {Name:mk7933924db175e12f40a7685f2c6fac606bd355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:13:56.029566   17047 start.go:365] acquiring machines lock for docker-flags-169000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:56.029601   17047 start.go:369] acquired machines lock for "docker-flags-169000" in 27.041µs
	I0304 04:13:56.029612   17047 start.go:93] Provisioning new machine with config: &{Name:docker-flags-169000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-169000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:56.029650   17047 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:56.034123   17047 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:56.052177   17047 start.go:159] libmachine.API.Create for "docker-flags-169000" (driver="qemu2")
	I0304 04:13:56.052225   17047 client.go:168] LocalClient.Create starting
	I0304 04:13:56.052282   17047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:56.052311   17047 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:56.052319   17047 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:56.052368   17047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:56.052390   17047 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:56.052397   17047 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:56.052748   17047 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:56.210626   17047 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:56.262349   17047 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:56.262354   17047 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:56.262525   17047 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2
	I0304 04:13:56.274938   17047 main.go:141] libmachine: STDOUT: 
	I0304 04:13:56.274957   17047 main.go:141] libmachine: STDERR: 
	I0304 04:13:56.275009   17047 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2 +20000M
	I0304 04:13:56.285503   17047 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:56.285523   17047 main.go:141] libmachine: STDERR: 
	I0304 04:13:56.285539   17047 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2
	I0304 04:13:56.285545   17047 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:56.285592   17047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:85:c2:29:15:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2
	I0304 04:13:56.287338   17047 main.go:141] libmachine: STDOUT: 
	I0304 04:13:56.287353   17047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:56.287372   17047 client.go:171] LocalClient.Create took 235.143625ms
	I0304 04:13:58.289621   17047 start.go:128] duration metric: createHost completed in 2.259952583s
	I0304 04:13:58.289685   17047 start.go:83] releasing machines lock for "docker-flags-169000", held for 2.260087917s
	W0304 04:13:58.289749   17047 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:58.309943   17047 out.go:177] * Deleting "docker-flags-169000" in qemu2 ...
	W0304 04:13:58.328509   17047 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:58.328533   17047 start.go:709] Will try again in 5 seconds ...
	I0304 04:14:03.330694   17047 start.go:365] acquiring machines lock for docker-flags-169000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:14:03.331104   17047 start.go:369] acquired machines lock for "docker-flags-169000" in 299.042µs
	I0304 04:14:03.331255   17047 start.go:93] Provisioning new machine with config: &{Name:docker-flags-169000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-169000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:14:03.331556   17047 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:14:03.340970   17047 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:14:03.391235   17047 start.go:159] libmachine.API.Create for "docker-flags-169000" (driver="qemu2")
	I0304 04:14:03.391286   17047 client.go:168] LocalClient.Create starting
	I0304 04:14:03.391412   17047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:14:03.391481   17047 main.go:141] libmachine: Decoding PEM data...
	I0304 04:14:03.391499   17047 main.go:141] libmachine: Parsing certificate...
	I0304 04:14:03.391563   17047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:14:03.391605   17047 main.go:141] libmachine: Decoding PEM data...
	I0304 04:14:03.391619   17047 main.go:141] libmachine: Parsing certificate...
	I0304 04:14:03.392484   17047 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:14:03.554533   17047 main.go:141] libmachine: Creating SSH key...
	I0304 04:14:03.669220   17047 main.go:141] libmachine: Creating Disk image...
	I0304 04:14:03.669225   17047 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:14:03.669421   17047 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2
	I0304 04:14:03.681711   17047 main.go:141] libmachine: STDOUT: 
	I0304 04:14:03.681791   17047 main.go:141] libmachine: STDERR: 
	I0304 04:14:03.681840   17047 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2 +20000M
	I0304 04:14:03.692417   17047 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:14:03.692436   17047 main.go:141] libmachine: STDERR: 
	I0304 04:14:03.692450   17047 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2
	I0304 04:14:03.692458   17047 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:14:03.692491   17047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5c:22:fa:09:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/docker-flags-169000/disk.qcow2
	I0304 04:14:03.694253   17047 main.go:141] libmachine: STDOUT: 
	I0304 04:14:03.694273   17047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:14:03.694287   17047 client.go:171] LocalClient.Create took 302.995125ms
	I0304 04:14:05.696512   17047 start.go:128] duration metric: createHost completed in 2.364879208s
	I0304 04:14:05.696580   17047 start.go:83] releasing machines lock for "docker-flags-169000", held for 2.365466166s
	W0304 04:14:05.697047   17047 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:14:05.710183   17047 out.go:177] 
	W0304 04:14:05.714115   17047 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:14:05.714147   17047 out.go:239] * 
	* 
	W0304 04:14:05.716906   17047 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:14:05.725928   17047 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-169000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-169000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-169000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (80.760625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-169000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-169000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-169000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-169000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-169000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-169000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.747792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-169000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-169000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-169000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-169000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-04 04:14:05.869882 -0800 PST m=+592.465487251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-169000 -n docker-flags-169000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-169000 -n docker-flags-169000: exit status 7 (34.99225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-169000
--- FAIL: TestDockerFlags (10.10s)
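Every qemu2 start in this test fails at the same point: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so the VM never boots and the later `systemctl show docker` checks cannot run. A pre-flight sketch (not part of the test suite; the function name is an assumption) that checks whether the socket_vmnet unix socket exists at the SocketVMnetPath recorded in the cluster config above:

```shell
#!/bin/sh
# check_vmnet_socket: report whether a unix socket exists at the given
# path. Defaults to /var/run/socket_vmnet, the SocketVMnetPath the
# qemu2 driver tried (and failed) to connect to in this report.
check_vmnet_socket() {
  path="${1:-/var/run/socket_vmnet}"
  if [ -S "$path" ]; then
    echo "present: $path"
  else
    echo "missing: $path"
  fi
}

check_vmnet_socket
```

A "missing" result before the run would explain the repeated "Connection refused" failures without waiting for each 10-second test timeout.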

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-322000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-322000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.850071167s)

-- stdout --
	* [force-systemd-flag-322000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-322000 in cluster force-systemd-flag-322000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:13:50.825182   17025 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:13:50.825315   17025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:50.825319   17025 out.go:304] Setting ErrFile to fd 2...
	I0304 04:13:50.825321   17025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:50.825446   17025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:13:50.826449   17025 out.go:298] Setting JSON to false
	I0304 04:13:50.842462   17025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9802,"bootTime":1709544628,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:13:50.842522   17025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:13:50.848362   17025 out.go:177] * [force-systemd-flag-322000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:13:50.855368   17025 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:13:50.855420   17025 notify.go:220] Checking for updates...
	I0304 04:13:50.862301   17025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:13:50.865371   17025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:13:50.868397   17025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:13:50.871381   17025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:13:50.874362   17025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:13:50.877772   17025 config.go:182] Loaded profile config "force-systemd-env-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:13:50.877839   17025 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:13:50.877887   17025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:13:50.882353   17025 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:13:50.889365   17025 start.go:299] selected driver: qemu2
	I0304 04:13:50.889373   17025 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:13:50.889378   17025 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:13:50.891594   17025 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:13:50.894310   17025 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:13:50.897399   17025 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:13:50.897427   17025 cni.go:84] Creating CNI manager for ""
	I0304 04:13:50.897435   17025 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:13:50.897441   17025 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:13:50.897447   17025 start_flags.go:323] config:
	{Name:force-systemd-flag-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:13:50.901865   17025 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:13:50.910396   17025 out.go:177] * Starting control plane node force-systemd-flag-322000 in cluster force-systemd-flag-322000
	I0304 04:13:50.914335   17025 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:13:50.914348   17025 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:13:50.914358   17025 cache.go:56] Caching tarball of preloaded images
	I0304 04:13:50.914406   17025 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:13:50.914411   17025 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:13:50.914463   17025 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/force-systemd-flag-322000/config.json ...
	I0304 04:13:50.914473   17025 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/force-systemd-flag-322000/config.json: {Name:mk3cef7ea58002e88d82826fc6f0b87885df7c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:13:50.914676   17025 start.go:365] acquiring machines lock for force-systemd-flag-322000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:50.914710   17025 start.go:369] acquired machines lock for "force-systemd-flag-322000" in 25.541µs
	I0304 04:13:50.914721   17025 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:50.914755   17025 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:50.922383   17025 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:50.939484   17025 start.go:159] libmachine.API.Create for "force-systemd-flag-322000" (driver="qemu2")
	I0304 04:13:50.939514   17025 client.go:168] LocalClient.Create starting
	I0304 04:13:50.939575   17025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:50.939607   17025 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:50.939620   17025 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:50.939677   17025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:50.939703   17025 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:50.939712   17025 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:50.940087   17025 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:51.082344   17025 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:51.163022   17025 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:51.163031   17025 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:51.163208   17025 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2
	I0304 04:13:51.175430   17025 main.go:141] libmachine: STDOUT: 
	I0304 04:13:51.175453   17025 main.go:141] libmachine: STDERR: 
	I0304 04:13:51.175507   17025 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2 +20000M
	I0304 04:13:51.186497   17025 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:51.186519   17025 main.go:141] libmachine: STDERR: 
	I0304 04:13:51.186535   17025 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2
	I0304 04:13:51.186539   17025 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:51.186566   17025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:13:33:2d:30:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2
	I0304 04:13:51.188260   17025 main.go:141] libmachine: STDOUT: 
	I0304 04:13:51.188276   17025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:51.188292   17025 client.go:171] LocalClient.Create took 248.774625ms
	I0304 04:13:53.190567   17025 start.go:128] duration metric: createHost completed in 2.275794084s
	I0304 04:13:53.190651   17025 start.go:83] releasing machines lock for "force-systemd-flag-322000", held for 2.275945958s
	W0304 04:13:53.190711   17025 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:53.215075   17025 out.go:177] * Deleting "force-systemd-flag-322000" in qemu2 ...
	W0304 04:13:53.236461   17025 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:53.236484   17025 start.go:709] Will try again in 5 seconds ...
	I0304 04:13:58.238647   17025 start.go:365] acquiring machines lock for force-systemd-flag-322000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:58.289793   17025 start.go:369] acquired machines lock for "force-systemd-flag-322000" in 50.999833ms
	I0304 04:13:58.289962   17025 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:58.290238   17025 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:58.298986   17025 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:58.346968   17025 start.go:159] libmachine.API.Create for "force-systemd-flag-322000" (driver="qemu2")
	I0304 04:13:58.347013   17025 client.go:168] LocalClient.Create starting
	I0304 04:13:58.347139   17025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:58.347205   17025 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:58.347221   17025 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:58.347287   17025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:58.347328   17025 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:58.347345   17025 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:58.348006   17025 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:58.499199   17025 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:58.562024   17025 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:58.562034   17025 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:58.562245   17025 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2
	I0304 04:13:58.574068   17025 main.go:141] libmachine: STDOUT: 
	I0304 04:13:58.574088   17025 main.go:141] libmachine: STDERR: 
	I0304 04:13:58.574133   17025 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2 +20000M
	I0304 04:13:58.584828   17025 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:58.584845   17025 main.go:141] libmachine: STDERR: 
	I0304 04:13:58.584857   17025 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2
	I0304 04:13:58.584870   17025 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:58.584898   17025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:90:fe:60:52:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-flag-322000/disk.qcow2
	I0304 04:13:58.586578   17025 main.go:141] libmachine: STDOUT: 
	I0304 04:13:58.586593   17025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:58.586607   17025 client.go:171] LocalClient.Create took 239.590959ms
	I0304 04:14:00.588824   17025 start.go:128] duration metric: createHost completed in 2.298559042s
	I0304 04:14:00.588933   17025 start.go:83] releasing machines lock for "force-systemd-flag-322000", held for 2.299128959s
	W0304 04:14:00.589312   17025 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:14:00.603075   17025 out.go:177] 
	W0304 04:14:00.617269   17025 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:14:00.617337   17025 out.go:239] * 
	* 
	W0304 04:14:00.619181   17025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:14:00.628953   17025 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-322000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-322000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-322000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (80.252625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-322000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-322000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-04 04:14:00.728511 -0800 PST m=+587.324085585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-322000 -n force-systemd-flag-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-322000 -n force-systemd-flag-322000: exit status 7 (36.168167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-322000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-322000
--- FAIL: TestForceSystemdFlag (10.07s)
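
Both create attempts above fail at the same step: the qemu2 driver shells out to `socket_vmnet_client`, which cannot reach the daemon's unix socket at `/var/run/socket_vmnet`. A minimal pre-flight sketch of that check is below; the helper name is hypothetical (not part of the test suite), and the default socket path is taken from the `SocketVMnetPath` value in the logged config.

```shell
# Hypothetical pre-flight check: report whether the socket_vmnet unix
# socket that minikube's qemu2 driver dials actually exists on the host.
check_socket_vmnet() {
  local sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # A socket file is present; the daemon is at least listening here.
    echo "socket present: $sock"
  else
    # Matches the failure mode in the log: nothing to connect to.
    echo "socket missing: $sock"
  fi
}

check_socket_vmnet
```

If the socket is missing, starting the daemon on the build agent (for a Homebrew install, typically `sudo brew services start socket_vmnet`) before the test run would likely avoid this entire class of "Connection refused" failures.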

TestForceSystemdEnv (12.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-315000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-315000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.980305208s)

-- stdout --
	* [force-systemd-env-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-315000 in cluster force-systemd-env-315000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:13:43.734104   16991 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:13:43.734235   16991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:43.734238   16991 out.go:304] Setting ErrFile to fd 2...
	I0304 04:13:43.734240   16991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:13:43.734367   16991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:13:43.735384   16991 out.go:298] Setting JSON to false
	I0304 04:13:43.752423   16991 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9795,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:13:43.752492   16991 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:13:43.758218   16991 out.go:177] * [force-systemd-env-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:13:43.765443   16991 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:13:43.768372   16991 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:13:43.765476   16991 notify.go:220] Checking for updates...
	I0304 04:13:43.771413   16991 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:13:43.774410   16991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:13:43.777460   16991 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:13:43.780421   16991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0304 04:13:43.783757   16991 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:13:43.783807   16991 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:13:43.787399   16991 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:13:43.794450   16991 start.go:299] selected driver: qemu2
	I0304 04:13:43.794458   16991 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:13:43.794464   16991 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:13:43.797034   16991 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:13:43.798623   16991 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:13:43.801449   16991 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:13:43.801477   16991 cni.go:84] Creating CNI manager for ""
	I0304 04:13:43.801484   16991 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:13:43.801491   16991 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:13:43.801497   16991 start_flags.go:323] config:
	{Name:force-systemd-env-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:13:43.806274   16991 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:13:43.814379   16991 out.go:177] * Starting control plane node force-systemd-env-315000 in cluster force-systemd-env-315000
	I0304 04:13:43.818326   16991 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:13:43.818343   16991 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:13:43.818349   16991 cache.go:56] Caching tarball of preloaded images
	I0304 04:13:43.818402   16991 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:13:43.818408   16991 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:13:43.818468   16991 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/force-systemd-env-315000/config.json ...
	I0304 04:13:43.818479   16991 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/force-systemd-env-315000/config.json: {Name:mkebc5b96e136bea5765d6317cceb1ea46d89cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:13:43.818701   16991 start.go:365] acquiring machines lock for force-systemd-env-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:43.818736   16991 start.go:369] acquired machines lock for "force-systemd-env-315000" in 26.375µs
	I0304 04:13:43.818746   16991 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:43.818781   16991 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:43.827446   16991 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:43.844283   16991 start.go:159] libmachine.API.Create for "force-systemd-env-315000" (driver="qemu2")
	I0304 04:13:43.844307   16991 client.go:168] LocalClient.Create starting
	I0304 04:13:43.844373   16991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:43.844401   16991 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:43.844411   16991 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:43.844453   16991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:43.844474   16991 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:43.844481   16991 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:43.844835   16991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:43.988976   16991 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:44.085610   16991 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:44.085617   16991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:44.085828   16991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2
	I0304 04:13:44.098002   16991 main.go:141] libmachine: STDOUT: 
	I0304 04:13:44.098019   16991 main.go:141] libmachine: STDERR: 
	I0304 04:13:44.098071   16991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2 +20000M
	I0304 04:13:44.108783   16991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:44.108803   16991 main.go:141] libmachine: STDERR: 
	I0304 04:13:44.108820   16991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2
	I0304 04:13:44.108846   16991 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:44.108882   16991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:f7:85:e0:6b:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2
	I0304 04:13:44.110630   16991 main.go:141] libmachine: STDOUT: 
	I0304 04:13:44.110647   16991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:44.110665   16991 client.go:171] LocalClient.Create took 266.353584ms
	I0304 04:13:46.111298   16991 start.go:128] duration metric: createHost completed in 2.292494042s
	I0304 04:13:46.111491   16991 start.go:83] releasing machines lock for "force-systemd-env-315000", held for 2.292709791s
	W0304 04:13:46.111545   16991 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:46.128533   16991 out.go:177] * Deleting "force-systemd-env-315000" in qemu2 ...
	W0304 04:13:46.159507   16991 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:46.159543   16991 start.go:709] Will try again in 5 seconds ...
	I0304 04:13:51.161607   16991 start.go:365] acquiring machines lock for force-systemd-env-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:53.190786   16991 start.go:369] acquired machines lock for "force-systemd-env-315000" in 2.029154208s
	I0304 04:13:53.190949   16991 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:53.191376   16991 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:53.205047   16991 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0304 04:13:53.254166   16991 start.go:159] libmachine.API.Create for "force-systemd-env-315000" (driver="qemu2")
	I0304 04:13:53.254224   16991 client.go:168] LocalClient.Create starting
	I0304 04:13:53.254340   16991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:53.254399   16991 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:53.254416   16991 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:53.254475   16991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:53.254515   16991 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:53.254541   16991 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:53.255145   16991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:53.455527   16991 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:53.588211   16991 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:53.588224   16991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:53.588395   16991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2
	I0304 04:13:53.609596   16991 main.go:141] libmachine: STDOUT: 
	I0304 04:13:53.609619   16991 main.go:141] libmachine: STDERR: 
	I0304 04:13:53.609673   16991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2 +20000M
	I0304 04:13:53.620406   16991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:53.620423   16991 main.go:141] libmachine: STDERR: 
	I0304 04:13:53.620436   16991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2
	I0304 04:13:53.620446   16991 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:53.620494   16991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:33:84:12:67:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/force-systemd-env-315000/disk.qcow2
	I0304 04:13:53.622177   16991 main.go:141] libmachine: STDOUT: 
	I0304 04:13:53.622193   16991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:53.622206   16991 client.go:171] LocalClient.Create took 367.979041ms
	I0304 04:13:55.624412   16991 start.go:128] duration metric: createHost completed in 2.432996375s
	I0304 04:13:55.624481   16991 start.go:83] releasing machines lock for "force-systemd-env-315000", held for 2.433671959s
	W0304 04:13:55.624853   16991 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:55.645572   16991 out.go:177] 
	W0304 04:13:55.654525   16991 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:13:55.654556   16991 out.go:239] * 
	* 
	W0304 04:13:55.657309   16991 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:13:55.668467   16991 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-315000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-315000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-315000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (83.548291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-315000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-315000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-04 04:13:55.769413 -0800 PST m=+582.364957876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-315000 -n force-systemd-env-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-315000 -n force-systemd-env-315000: exit status 7 (35.224416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-315000
--- FAIL: TestForceSystemdEnv (12.20s)
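Editor's note: both create attempts in this test fail at the same step: `socket_vmnet_client` cannot connect to the Unix socket at `/var/run/socket_vmnet` ("Connection refused"), which suggests the socket_vmnet daemon is not running on the host. The failing connection check can be reproduced outside minikube with a short sketch (Python; the socket path is taken from the log above, and the function name is made up for illustration):

```python
import socket

def vmnet_socket_reachable(path: str = "/var/run/socket_vmnet") -> bool:
    """Try to connect to the Unix socket at `path`, as socket_vmnet_client does.

    Returns False on errors such as ECONNREFUSED or ENOENT, i.e. when the
    socket_vmnet daemon is not running or the socket file is absent.
    """
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(path)
        return True
    except OSError:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    print(vmnet_socket_reachable())
```

On the affected CI host this should print `False`; once the socket_vmnet daemon is running (it is typically managed as a root launchd/brew service, an assumption not confirmed by this log), it should print `True`.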

TestErrorSpam/setup (10.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-336000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-336000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 --driver=qemu2 : exit status 80 (10.08929225s)

-- stdout --
	* [nospam-336000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node nospam-336000 in cluster nospam-336000
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-336000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-336000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-336000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18284
- KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node nospam-336000 in cluster nospam-336000
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-336000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (10.09s)
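Editor's note: TestErrorSpam/setup hits the same root cause as the failures above. The two host-side prerequisites visible in the failing qemu command line (the client binary and the daemon socket) can be checked before re-running the suite with a small sketch (Python; the paths are copied from the log, and `preflight` is a hypothetical helper, not part of minikube):

```python
import os
import stat

# Paths copied from the failing qemu invocation in the log above.
CLIENT = "/opt/socket_vmnet/bin/socket_vmnet_client"
SOCKET = "/var/run/socket_vmnet"

def preflight(client: str = CLIENT, socket_path: str = SOCKET) -> list:
    """Report which socket_vmnet prerequisites are missing on this host."""
    problems = []
    if not os.access(client, os.X_OK):
        problems.append("client binary not executable: %s" % client)
    try:
        if not stat.S_ISSOCK(os.stat(socket_path).st_mode):
            problems.append("not a socket: %s" % socket_path)
    except FileNotFoundError:
        problems.append("socket file absent (daemon not running?): %s" % socket_path)
    return problems

if __name__ == "__main__":
    for problem in preflight():
        print(problem)
```

An empty result means both prerequisites look fine; otherwise each entry names what to fix before retrying `minikube start`.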

TestFunctional/serial/StartWithProxy (9.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-682000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-682000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.802900125s)

-- stdout --
	* [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node functional-682000 in cluster functional-682000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-682000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52429 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52429 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52429 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-682000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18284
- KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node functional-682000 in cluster functional-682000
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-682000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52429 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52429 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52429 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (71.355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.88s)
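Every failure in this run traces back to the same root cause: nothing was listening on `/var/run/socket_vmnet` when minikube invoked `socket_vmnet_client`. A minimal host-side check is sketched below; the socket path matches the `SocketVMnetPath` shown in the profile config in this log, but the script itself is an illustrative diagnostic, not part of the test suite.

```shell
#!/bin/sh
# Sketch: verify the socket_vmnet daemon's unix socket exists before
# running "minikube start --driver=qemu2". Path is the default seen in
# this log; override with SOCKET=... if your install differs.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"

if [ -S "$SOCKET" ]; then
    echo "ok: $SOCKET exists"
else
    echo "missing: $SOCKET (start the socket_vmnet daemon first)"
fi
```

If the socket is missing, the repeated "Connection refused" errors above are expected: `socket_vmnet_client` exits before QEMU ever starts.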

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-682000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-682000 --alsologtostderr -v=8: exit status 80 (5.194456709s)

-- stdout --
	* [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-682000 in cluster functional-682000
	* Restarting existing qemu2 VM for "functional-682000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-682000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:06:08.799967   15774 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:06:08.800112   15774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:06:08.800115   15774 out.go:304] Setting ErrFile to fd 2...
	I0304 04:06:08.800117   15774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:06:08.800246   15774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:06:08.801215   15774 out.go:298] Setting JSON to false
	I0304 04:06:08.817431   15774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9340,"bootTime":1709544628,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:06:08.817486   15774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:06:08.822746   15774 out.go:177] * [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:06:08.830631   15774 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:06:08.830678   15774 notify.go:220] Checking for updates...
	I0304 04:06:08.834734   15774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:06:08.838745   15774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:06:08.841728   15774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:06:08.844717   15774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:06:08.847719   15774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:06:08.851143   15774 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:06:08.851202   15774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:06:08.855699   15774 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:06:08.862687   15774 start.go:299] selected driver: qemu2
	I0304 04:06:08.862694   15774 start.go:903] validating driver "qemu2" against &{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:06:08.862776   15774 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:06:08.865099   15774 cni.go:84] Creating CNI manager for ""
	I0304 04:06:08.865118   15774 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:06:08.865132   15774 start_flags.go:323] config:
	{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:06:08.869769   15774 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:06:08.877548   15774 out.go:177] * Starting control plane node functional-682000 in cluster functional-682000
	I0304 04:06:08.881725   15774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:06:08.881744   15774 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:06:08.881755   15774 cache.go:56] Caching tarball of preloaded images
	I0304 04:06:08.881813   15774 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:06:08.881818   15774 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:06:08.881887   15774 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/functional-682000/config.json ...
	I0304 04:06:08.882355   15774 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:06:08.882381   15774 start.go:369] acquired machines lock for "functional-682000" in 20.167µs
	I0304 04:06:08.882388   15774 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:06:08.882393   15774 fix.go:54] fixHost starting: 
	I0304 04:06:08.882512   15774 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
	W0304 04:06:08.882521   15774 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:06:08.890651   15774 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
	I0304 04:06:08.894717   15774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
	I0304 04:06:08.896783   15774 main.go:141] libmachine: STDOUT: 
	I0304 04:06:08.896806   15774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:06:08.896836   15774 fix.go:56] fixHost completed within 14.441834ms
	I0304 04:06:08.896840   15774 start.go:83] releasing machines lock for "functional-682000", held for 14.45525ms
	W0304 04:06:08.896848   15774 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:06:08.896883   15774 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:06:08.896888   15774 start.go:709] Will try again in 5 seconds ...
	I0304 04:06:13.897903   15774 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:06:13.898308   15774 start.go:369] acquired machines lock for "functional-682000" in 279.792µs
	I0304 04:06:13.898455   15774 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:06:13.898477   15774 fix.go:54] fixHost starting: 
	I0304 04:06:13.899215   15774 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
	W0304 04:06:13.899243   15774 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:06:13.908626   15774 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
	I0304 04:06:13.913949   15774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
	I0304 04:06:13.924333   15774 main.go:141] libmachine: STDOUT: 
	I0304 04:06:13.924420   15774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:06:13.924515   15774 fix.go:56] fixHost completed within 26.038959ms
	I0304 04:06:13.924542   15774 start.go:83] releasing machines lock for "functional-682000", held for 26.208542ms
	W0304 04:06:13.924897   15774 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:06:13.931655   15774 out.go:177] 
	W0304 04:06:13.935751   15774 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:06:13.935784   15774 out.go:239] * 
	* 
	W0304 04:06:13.938525   15774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:06:13.947647   15774 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-682000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.196292833s for "functional-682000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (69.222041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.915583ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-682000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.364625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-682000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-682000 get po -A: exit status 1 (27.158291ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-682000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-682000\n"*: args "kubectl --context functional-682000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-682000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.415917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl images: exit status 89 (42.883542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl images" ssh exit status 89
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 89 (44.955375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-682000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 89
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 89 (42.773084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 89 (44.737541ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-682000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 89
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 kubectl -- --context functional-682000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 kubectl -- --context functional-682000 get pods: exit status 1 (513.828917ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-682000
	* no server found for cluster "functional-682000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-682000 kubectl -- --context functional-682000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (34.373792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-682000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-682000 get pods: exit status 1 (674.90975ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-682000
	* no server found for cluster "functional-682000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-682000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (31.780583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-682000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-682000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.193159125s)

-- stdout --
	* [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-682000 in cluster functional-682000
	* Restarting existing qemu2 VM for "functional-682000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-682000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-682000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.193777042s for "functional-682000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (69.429042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-682000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-682000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (30.707125ms)

** stderr ** 
	error: context "functional-682000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-682000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.722625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 logs: exit status 89 (77.7705ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
	|         | -p download-only-150000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
	| delete  | -p download-only-150000                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
	| start   | -o=json --download-only                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
	|         | -p download-only-405000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| delete  | -p download-only-405000                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| start   | -o=json --download-only                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | -p download-only-719000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| delete  | -p download-only-719000                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| delete  | -p download-only-150000                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| delete  | -p download-only-405000                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| delete  | -p download-only-719000                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| start   | --download-only -p                                                       | binary-mirror-685000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | binary-mirror-685000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52418                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-685000                                                  | binary-mirror-685000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| addons  | enable dashboard -p                                                      | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | addons-038000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | addons-038000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-038000 --wait=true                                             | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-038000                                                         | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| start   | -p nospam-336000 -n=1 --memory=2250 --wait=false                         | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-336000                                                         | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | minikube-local-cache-test:functional-682000                              |                      |         |         |                     |                     |
	| cache   | functional-682000 cache delete                                           | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | minikube-local-cache-test:functional-682000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	| ssh     | functional-682000 ssh sudo                                               | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-682000                                                        | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-682000 ssh                                                    | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-682000 cache reload                                           | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	| ssh     | functional-682000 ssh                                                    | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-682000 kubectl --                                             | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | --context functional-682000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/04 04:06:23
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0304 04:06:23.048718   15858 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:06:23.048865   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:06:23.048867   15858 out.go:304] Setting ErrFile to fd 2...
	I0304 04:06:23.048869   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:06:23.048985   15858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:06:23.050000   15858 out.go:298] Setting JSON to false
	I0304 04:06:23.066185   15858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9355,"bootTime":1709544628,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:06:23.066246   15858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:06:23.070752   15858 out.go:177] * [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:06:23.079625   15858 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:06:23.083657   15858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:06:23.079695   15858 notify.go:220] Checking for updates...
	I0304 04:06:23.090588   15858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:06:23.093650   15858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:06:23.096615   15858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:06:23.099668   15858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:06:23.103013   15858 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:06:23.103060   15858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:06:23.106602   15858 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:06:23.113667   15858 start.go:299] selected driver: qemu2
	I0304 04:06:23.113671   15858 start.go:903] validating driver "qemu2" against &{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:06:23.113743   15858 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:06:23.115959   15858 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:06:23.116000   15858 cni.go:84] Creating CNI manager for ""
	I0304 04:06:23.116009   15858 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:06:23.116020   15858 start_flags.go:323] config:
	{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:06:23.120411   15858 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:06:23.128636   15858 out.go:177] * Starting control plane node functional-682000 in cluster functional-682000
	I0304 04:06:23.132654   15858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:06:23.132669   15858 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:06:23.132679   15858 cache.go:56] Caching tarball of preloaded images
	I0304 04:06:23.132752   15858 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:06:23.132756   15858 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:06:23.132843   15858 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/functional-682000/config.json ...
	I0304 04:06:23.133305   15858 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:06:23.133340   15858 start.go:369] acquired machines lock for "functional-682000" in 31.583µs
	I0304 04:06:23.133347   15858 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:06:23.133349   15858 fix.go:54] fixHost starting: 
	I0304 04:06:23.133464   15858 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
	W0304 04:06:23.133470   15858 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:06:23.137658   15858 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
	I0304 04:06:23.148656   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
	I0304 04:06:23.150715   15858 main.go:141] libmachine: STDOUT: 
	I0304 04:06:23.150731   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:06:23.150758   15858 fix.go:56] fixHost completed within 17.407875ms
	I0304 04:06:23.150761   15858 start.go:83] releasing machines lock for "functional-682000", held for 17.418042ms
	W0304 04:06:23.150768   15858 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:06:23.150807   15858 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:06:23.150812   15858 start.go:709] Will try again in 5 seconds ...
	I0304 04:06:28.151569   15858 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:06:28.152041   15858 start.go:369] acquired machines lock for "functional-682000" in 394.5µs
	I0304 04:06:28.152173   15858 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:06:28.152186   15858 fix.go:54] fixHost starting: 
	I0304 04:06:28.152903   15858 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
	W0304 04:06:28.152924   15858 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:06:28.157336   15858 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
	I0304 04:06:28.166475   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
	I0304 04:06:28.176502   15858 main.go:141] libmachine: STDOUT: 
	I0304 04:06:28.176587   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:06:28.176693   15858 fix.go:56] fixHost completed within 24.507541ms
	I0304 04:06:28.176707   15858 start.go:83] releasing machines lock for "functional-682000", held for 24.648333ms
	W0304 04:06:28.176942   15858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:06:28.185301   15858 out.go:177] 
	W0304 04:06:28.188328   15858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:06:28.188349   15858 out.go:239] * 
	W0304 04:06:28.190619   15858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:06:28.198376   15858 out.go:177] 
	
	
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-682000 logs failed: exit status 89
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
|         | -p download-only-150000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
| delete  | -p download-only-150000                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
| start   | -o=json --download-only                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
|         | -p download-only-405000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-405000                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | -o=json --download-only                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | -p download-only-719000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-719000                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-150000                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-405000                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-719000                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | --download-only -p                                                       | binary-mirror-685000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | binary-mirror-685000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52418                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-685000                                                  | binary-mirror-685000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| addons  | enable dashboard -p                                                      | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | addons-038000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | addons-038000                                                            |                      |         |         |                     |                     |
| start   | -p addons-038000 --wait=true                                             | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-038000                                                         | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | -p nospam-336000 -n=1 --memory=2250 --wait=false                         | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-336000                                                         | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | minikube-local-cache-test:functional-682000                              |                      |         |         |                     |                     |
| cache   | functional-682000 cache delete                                           | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | minikube-local-cache-test:functional-682000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
| ssh     | functional-682000 ssh sudo                                               | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-682000                                                        | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-682000 ssh                                                    | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-682000 cache reload                                           | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
| ssh     | functional-682000 ssh                                                    | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-682000 kubectl --                                             | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | --context functional-682000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/04 04:06:23
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0304 04:06:23.048718   15858 out.go:291] Setting OutFile to fd 1 ...
I0304 04:06:23.048865   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:06:23.048867   15858 out.go:304] Setting ErrFile to fd 2...
I0304 04:06:23.048869   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:06:23.048985   15858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:06:23.050000   15858 out.go:298] Setting JSON to false
I0304 04:06:23.066185   15858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9355,"bootTime":1709544628,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0304 04:06:23.066246   15858 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0304 04:06:23.070752   15858 out.go:177] * [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0304 04:06:23.079625   15858 out.go:177]   - MINIKUBE_LOCATION=18284
I0304 04:06:23.083657   15858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
I0304 04:06:23.079695   15858 notify.go:220] Checking for updates...
I0304 04:06:23.090588   15858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0304 04:06:23.093650   15858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0304 04:06:23.096615   15858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
I0304 04:06:23.099668   15858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0304 04:06:23.103013   15858 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:06:23.103060   15858 driver.go:392] Setting default libvirt URI to qemu:///system
I0304 04:06:23.106602   15858 out.go:177] * Using the qemu2 driver based on existing profile
I0304 04:06:23.113667   15858 start.go:299] selected driver: qemu2
I0304 04:06:23.113671   15858 start.go:903] validating driver "qemu2" against &{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0304 04:06:23.113743   15858 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0304 04:06:23.115959   15858 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0304 04:06:23.116000   15858 cni.go:84] Creating CNI manager for ""
I0304 04:06:23.116009   15858 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0304 04:06:23.116020   15858 start_flags.go:323] config:
{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0304 04:06:23.120411   15858 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0304 04:06:23.128636   15858 out.go:177] * Starting control plane node functional-682000 in cluster functional-682000
I0304 04:06:23.132654   15858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0304 04:06:23.132669   15858 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0304 04:06:23.132679   15858 cache.go:56] Caching tarball of preloaded images
I0304 04:06:23.132752   15858 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0304 04:06:23.132756   15858 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
I0304 04:06:23.132843   15858 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/functional-682000/config.json ...
I0304 04:06:23.133305   15858 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0304 04:06:23.133340   15858 start.go:369] acquired machines lock for "functional-682000" in 31.583µs
I0304 04:06:23.133347   15858 start.go:96] Skipping create...Using existing machine configuration
I0304 04:06:23.133349   15858 fix.go:54] fixHost starting: 
I0304 04:06:23.133464   15858 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
W0304 04:06:23.133470   15858 fix.go:128] unexpected machine state, will restart: <nil>
I0304 04:06:23.137658   15858 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
I0304 04:06:23.148656   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
I0304 04:06:23.150715   15858 main.go:141] libmachine: STDOUT: 
I0304 04:06:23.150731   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0304 04:06:23.150758   15858 fix.go:56] fixHost completed within 17.407875ms
I0304 04:06:23.150761   15858 start.go:83] releasing machines lock for "functional-682000", held for 17.418042ms
W0304 04:06:23.150768   15858 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0304 04:06:23.150807   15858 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0304 04:06:23.150812   15858 start.go:709] Will try again in 5 seconds ...
I0304 04:06:28.151569   15858 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0304 04:06:28.152041   15858 start.go:369] acquired machines lock for "functional-682000" in 394.5µs
I0304 04:06:28.152173   15858 start.go:96] Skipping create...Using existing machine configuration
I0304 04:06:28.152186   15858 fix.go:54] fixHost starting: 
I0304 04:06:28.152903   15858 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
W0304 04:06:28.152924   15858 fix.go:128] unexpected machine state, will restart: <nil>
I0304 04:06:28.157336   15858 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
I0304 04:06:28.166475   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
I0304 04:06:28.176502   15858 main.go:141] libmachine: STDOUT: 
I0304 04:06:28.176587   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0304 04:06:28.176693   15858 fix.go:56] fixHost completed within 24.507541ms
I0304 04:06:28.176707   15858 start.go:83] releasing machines lock for "functional-682000", held for 24.648333ms
W0304 04:06:28.176942   15858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0304 04:06:28.185301   15858 out.go:177] 
W0304 04:06:28.188328   15858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0304 04:06:28.188349   15858 out.go:239] * 
W0304 04:06:28.190619   15858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0304 04:06:28.198376   15858 out.go:177] 

* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2645549577/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
|         | -p download-only-150000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.16.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
| delete  | -p download-only-150000                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
| start   | -o=json --download-only                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
|         | -p download-only-405000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-405000                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | -o=json --download-only                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | -p download-only-719000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-719000                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-150000                                                  | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-405000                                                  | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| delete  | -p download-only-719000                                                  | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | --download-only -p                                                       | binary-mirror-685000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | binary-mirror-685000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52418                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-685000                                                  | binary-mirror-685000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| addons  | enable dashboard -p                                                      | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | addons-038000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | addons-038000                                                            |                      |         |         |                     |                     |
| start   | -p addons-038000 --wait=true                                             | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-038000                                                         | addons-038000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | -p nospam-336000 -n=1 --memory=2250 --wait=false                         | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-336000 --log_dir                                                  | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-336000                                                         | nospam-336000        | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-682000 cache add                                              | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | minikube-local-cache-test:functional-682000                              |                      |         |         |                     |                     |
| cache   | functional-682000 cache delete                                           | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | minikube-local-cache-test:functional-682000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
| ssh     | functional-682000 ssh sudo                                               | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-682000                                                        | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-682000 ssh                                                    | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-682000 cache reload                                           | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
| ssh     | functional-682000 ssh                                                    | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:06 PST | 04 Mar 24 04:06 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-682000 kubectl --                                             | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | --context functional-682000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-682000                                                     | functional-682000    | jenkins | v1.32.0 | 04 Mar 24 04:06 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/03/04 04:06:23
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0304 04:06:23.048718   15858 out.go:291] Setting OutFile to fd 1 ...
I0304 04:06:23.048865   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:06:23.048867   15858 out.go:304] Setting ErrFile to fd 2...
I0304 04:06:23.048869   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:06:23.048985   15858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:06:23.050000   15858 out.go:298] Setting JSON to false
I0304 04:06:23.066185   15858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9355,"bootTime":1709544628,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0304 04:06:23.066246   15858 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0304 04:06:23.070752   15858 out.go:177] * [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0304 04:06:23.079625   15858 out.go:177]   - MINIKUBE_LOCATION=18284
I0304 04:06:23.083657   15858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
I0304 04:06:23.079695   15858 notify.go:220] Checking for updates...
I0304 04:06:23.090588   15858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0304 04:06:23.093650   15858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0304 04:06:23.096615   15858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
I0304 04:06:23.099668   15858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0304 04:06:23.103013   15858 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:06:23.103060   15858 driver.go:392] Setting default libvirt URI to qemu:///system
I0304 04:06:23.106602   15858 out.go:177] * Using the qemu2 driver based on existing profile
I0304 04:06:23.113667   15858 start.go:299] selected driver: qemu2
I0304 04:06:23.113671   15858 start.go:903] validating driver "qemu2" against &{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0304 04:06:23.113743   15858 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0304 04:06:23.115959   15858 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0304 04:06:23.116000   15858 cni.go:84] Creating CNI manager for ""
I0304 04:06:23.116009   15858 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0304 04:06:23.116020   15858 start_flags.go:323] config:
{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0304 04:06:23.120411   15858 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0304 04:06:23.128636   15858 out.go:177] * Starting control plane node functional-682000 in cluster functional-682000
I0304 04:06:23.132654   15858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0304 04:06:23.132669   15858 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0304 04:06:23.132679   15858 cache.go:56] Caching tarball of preloaded images
I0304 04:06:23.132752   15858 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0304 04:06:23.132756   15858 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
I0304 04:06:23.132843   15858 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/functional-682000/config.json ...
I0304 04:06:23.133305   15858 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0304 04:06:23.133340   15858 start.go:369] acquired machines lock for "functional-682000" in 31.583µs
I0304 04:06:23.133347   15858 start.go:96] Skipping create...Using existing machine configuration
I0304 04:06:23.133349   15858 fix.go:54] fixHost starting: 
I0304 04:06:23.133464   15858 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
W0304 04:06:23.133470   15858 fix.go:128] unexpected machine state, will restart: <nil>
I0304 04:06:23.137658   15858 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
I0304 04:06:23.148656   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
I0304 04:06:23.150715   15858 main.go:141] libmachine: STDOUT: 
I0304 04:06:23.150731   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0304 04:06:23.150758   15858 fix.go:56] fixHost completed within 17.407875ms
I0304 04:06:23.150761   15858 start.go:83] releasing machines lock for "functional-682000", held for 17.418042ms
W0304 04:06:23.150768   15858 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0304 04:06:23.150807   15858 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0304 04:06:23.150812   15858 start.go:709] Will try again in 5 seconds ...
I0304 04:06:28.151569   15858 start.go:365] acquiring machines lock for functional-682000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0304 04:06:28.152041   15858 start.go:369] acquired machines lock for "functional-682000" in 394.5µs
I0304 04:06:28.152173   15858 start.go:96] Skipping create...Using existing machine configuration
I0304 04:06:28.152186   15858 fix.go:54] fixHost starting: 
I0304 04:06:28.152903   15858 fix.go:102] recreateIfNeeded on functional-682000: state=Stopped err=<nil>
W0304 04:06:28.152924   15858 fix.go:128] unexpected machine state, will restart: <nil>
I0304 04:06:28.157336   15858 out.go:177] * Restarting existing qemu2 VM for "functional-682000" ...
I0304 04:06:28.166475   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:4e:aa:59:82:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/functional-682000/disk.qcow2
I0304 04:06:28.176502   15858 main.go:141] libmachine: STDOUT: 
I0304 04:06:28.176587   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0304 04:06:28.176693   15858 fix.go:56] fixHost completed within 24.507541ms
I0304 04:06:28.176707   15858 start.go:83] releasing machines lock for "functional-682000", held for 24.648333ms
W0304 04:06:28.176942   15858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0304 04:06:28.185301   15858 out.go:177] 
W0304 04:06:28.188328   15858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0304 04:06:28.188349   15858 out.go:239] * 
W0304 04:06:28.190619   15858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0304 04:06:28.198376   15858 out.go:177] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
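Every start attempt in this run dies the same way: the qemu2 driver cannot reach the socket_vmnet socket (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). A quick triage step is to grep a captured log for that pattern to confirm a single root cause rather than many distinct failures; a minimal sketch, using a sample line instead of a real log file (in practice you would point `grep` at `logs.txt` from `minikube logs --file=logs.txt`):

```shell
# Confirm the shared root cause by matching the driver error pattern.
# The sample line mirrors the STDERR captured above; the log file path in a
# real run is an assumption and depends on where logs were collected.
log_line='Failed to connect to "/var/run/socket_vmnet": Connection refused'
printf '%s\n' "$log_line" | grep -c 'Connection refused'
```

If the count matches the number of failed starts, the failures are one environment problem (the socket_vmnet daemon not listening on the host), not independent test bugs.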
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-682000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-682000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.100625ms)
** stderr ** 
	error: context "functional-682000" does not exist
** /stderr **
functional_test.go:2319: kubectl --context functional-682000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-682000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-682000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-682000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-682000 --alsologtostderr -v=1] stderr:
I0304 04:07:23.237640   16211 out.go:291] Setting OutFile to fd 1 ...
I0304 04:07:23.238039   16211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.238044   16211 out.go:304] Setting ErrFile to fd 2...
I0304 04:07:23.238046   16211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.238200   16211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:07:23.238463   16211 mustload.go:65] Loading cluster: functional-682000
I0304 04:07:23.238661   16211 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.241937   16211 out.go:177] * The control plane node must be running for this command
I0304 04:07:23.245897   16211 out.go:177]   To start a cluster, run: "minikube start -p functional-682000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (45.157ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.13s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 status: exit status 7 (32.529708ms)
-- stdout --
	functional-682000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-682000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.127ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-682000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 status -o json: exit status 7 (32.11ms)
-- stdout --
	{"Name":"functional-682000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-682000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.509042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-682000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-682000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.762708ms)
** stderr ** 
	error: context "functional-682000" does not exist
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-682000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-682000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-682000 describe po hello-node-connect: exit status 1 (26.883458ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000
** /stderr **
functional_test.go:1600: "kubectl --context functional-682000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-682000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-682000 logs -l app=hello-node-connect: exit status 1 (26.566792ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000
** /stderr **
functional_test.go:1606: "kubectl --context functional-682000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-682000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-682000 describe svc hello-node-connect: exit status 1 (26.577583ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000
** /stderr **
functional_test.go:1612: "kubectl --context functional-682000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.36175ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-682000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.460375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "echo hello": exit status 89 (44.5155ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"echo hello\"" : exit status 89
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n"*. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "cat /etc/hostname": exit status 89 (49.833583ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"cat /etc/hostname\"" : exit status 89
functional_test.go:1748: expected minikube ssh command output to be -"functional-682000"- but got *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n"*. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (31.615709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
TestFunctional/parallel/CpCmd (0.28s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 89 (58.44325ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-682000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 "sudo cat /home/docker/cp-test.txt": exit status 89 (45.009958ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 89
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-682000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cp functional-682000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd216136975/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 cp functional-682000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd216136975/001/cp-test.txt: exit status 89 (42.458792ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-682000 cp functional-682000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd216136975/001/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 "sudo cat /home/docker/cp-test.txt": exit status 89 (42.884125ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 89
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd216136975/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 89 (48.55125ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-682000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 89
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 89 (44.139084ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-682000 ssh -n functional-682000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 89
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-682000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15486/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/test/nested/copy/15486/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/test/nested/copy/15486/hosts": exit status 89 (42.756ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/test/nested/copy/15486/hosts" failed: exit status 89
functional_test.go:1932: file sync test content: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control plane node must be running for this command\n  To star",
+ 	"t a cluster, run: \"minikube start -p functional-682000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (32.791208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
TestFunctional/parallel/CertSync (0.29s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15486.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/15486.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/15486.pem": exit status 89 (43.416ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/15486.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo cat /etc/ssl/certs/15486.pem\"": exit status 89
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/15486.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-682000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15486.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /usr/share/ca-certificates/15486.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /usr/share/ca-certificates/15486.pem": exit status 89 (39.686584ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/15486.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo cat /usr/share/ca-certificates/15486.pem\"": exit status 89
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/15486.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-682000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 89 (47.683458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 89
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-682000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/154862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/154862.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/154862.pem": exit status 89 (41.686709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/154862.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo cat /etc/ssl/certs/154862.pem\"": exit status 89
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/154862.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-682000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/154862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /usr/share/ca-certificates/154862.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /usr/share/ca-certificates/154862.pem": exit status 89 (42.824291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/154862.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo cat /usr/share/ca-certificates/154862.pem\"": exit status 89
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/154862.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-682000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 89 (43.57325ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 89
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control plane node must be running for this command
+ 	  To start a cluster, run: "minikube start -p functional-682000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (31.81775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-682000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-682000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.018375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-682000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-682000 -n functional-682000: exit status 7 (33.254833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-682000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo systemctl is-active crio": exit status 89 (40.466666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --: exit status 89
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 version -o=json --components: exit status 89 (43.881708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:2268: error version: exit status 89
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-682000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-682000 image ls --format short --alsologtostderr:
I0304 04:07:23.655117   16226 out.go:291] Setting OutFile to fd 1 ...
I0304 04:07:23.655281   16226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.655286   16226 out.go:304] Setting ErrFile to fd 2...
I0304 04:07:23.655288   16226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.655423   16226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:07:23.655818   16226 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.655878   16226 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-682000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-682000 image ls --format table --alsologtostderr:
I0304 04:07:23.768480   16232 out.go:291] Setting OutFile to fd 1 ...
I0304 04:07:23.768622   16232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.768627   16232 out.go:304] Setting ErrFile to fd 2...
I0304 04:07:23.768629   16232 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.768765   16232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:07:23.769172   16232 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.769231   16232 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-682000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-682000 image ls --format json --alsologtostderr:
I0304 04:07:23.730087   16230 out.go:291] Setting OutFile to fd 1 ...
I0304 04:07:23.730257   16230 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.730260   16230 out.go:304] Setting ErrFile to fd 2...
I0304 04:07:23.730262   16230 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.730396   16230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:07:23.730835   16230 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.730899   16230 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-682000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-682000 image ls --format yaml --alsologtostderr:
I0304 04:07:23.691802   16228 out.go:291] Setting OutFile to fd 1 ...
I0304 04:07:23.691975   16228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.691978   16228 out.go:304] Setting ErrFile to fd 2...
I0304 04:07:23.691980   16228 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.692102   16228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:07:23.692520   16228 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.692579   16228 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh pgrep buildkitd: exit status 89 (41.646666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image build -t localhost/my-image:functional-682000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-682000 image build -t localhost/my-image:functional-682000 testdata/build --alsologtostderr:
I0304 04:07:23.847549   16236 out.go:291] Setting OutFile to fd 1 ...
I0304 04:07:23.847947   16236 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.847952   16236 out.go:304] Setting ErrFile to fd 2...
I0304 04:07:23.847955   16236 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:07:23.848106   16236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:07:23.848521   16236 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.848931   16236 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:07:23.849188   16236 build_images.go:123] succeeded building to: 
I0304 04:07:23.849192   16236 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls
functional_test.go:442: expected "localhost/my-image:functional-682000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-682000 docker-env) && out/minikube-darwin-arm64 status -p functional-682000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-682000 docker-env) && out/minikube-darwin-arm64 status -p functional-682000": exit status 1 (52.022417ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
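The failing command above uses the eval-of-exports pattern: `minikube docker-env` prints `export` statements, the caller evals them, and a follow-up command sees the variables. A minimal self-contained sketch of that pattern, with a hypothetical `fake_docker_env` (and a made-up IP) standing in for `minikube docker-env`, which needs a running cluster:

```shell
# fake_docker_env stands in for `minikube -p <profile> docker-env` (assumption:
# real output is a series of export statements like these; the IP is invented).
fake_docker_env() {
  echo 'export DOCKER_HOST="tcp://192.168.105.4:2376"'
  echo 'export DOCKER_TLS_VERIFY="1"'
}

# The test's pattern: eval the exports, then run a command that reads them.
eval "$(fake_docker_env)"
echo "DOCKER_HOST=$DOCKER_HOST"
```

Here the `eval` succeeds because the stand-in always prints valid exports; in the test, `docker-env` itself exits non-zero when the control plane is down, so the chained `minikube status` never runs with a usable environment.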

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2: exit status 89 (43.885584ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
** stderr ** 
	I0304 04:07:23.523232   16220 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:07:23.523622   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.523626   16220 out.go:304] Setting ErrFile to fd 2...
	I0304 04:07:23.523629   16220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.523798   16220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:07:23.524020   16220 mustload.go:65] Loading cluster: functional-682000
	I0304 04:07:23.524233   16220 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:07:23.528718   16220 out.go:177] * The control plane node must be running for this command
	I0304 04:07:23.531852   16220 out.go:177]   To start a cluster, run: "minikube start -p functional-682000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2122: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2: exit status 89 (44.496833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
** stderr ** 
	I0304 04:07:23.610298   16224 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:07:23.610469   16224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.610473   16224 out.go:304] Setting ErrFile to fd 2...
	I0304 04:07:23.610475   16224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.610603   16224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:07:23.610825   16224 mustload.go:65] Loading cluster: functional-682000
	I0304 04:07:23.611027   16224 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:07:23.615689   16224 out.go:177] * The control plane node must be running for this command
	I0304 04:07:23.619878   16224 out.go:177]   To start a cluster, run: "minikube start -p functional-682000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2122: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2: exit status 89 (42.668333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
** stderr ** 
	I0304 04:07:23.566989   16222 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:07:23.567137   16222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.567140   16222 out.go:304] Setting ErrFile to fd 2...
	I0304 04:07:23.567143   16222 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.567270   16222 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:07:23.567495   16222 mustload.go:65] Loading cluster: functional-682000
	I0304 04:07:23.567675   16222 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:07:23.571769   16222 out.go:177] * The control plane node must be running for this command
	I0304 04:07:23.574760   16222 out.go:177]   To start a cluster, run: "minikube start -p functional-682000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-682000 update-context --alsologtostderr -v=2": exit status 89
functional_test.go:2122: update-context: got="* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-682000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-682000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.597875ms)

** stderr ** 
	error: context "functional-682000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-682000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 service list: exit status 89 (45.808375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-682000 service list" : exit status 89
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 service list -o json: exit status 89 (44.894167ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-682000 service list -o json": exit status 89
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 service --namespace=default --https --url hello-node: exit status 89 (53.847208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-682000 service --namespace=default --https --url hello-node" : exit status 89
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 service hello-node --url --format={{.IP}}: exit status 89 (43.833833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-682000 service hello-node --url --format={{.IP}}": exit status 89
functional_test.go:1544: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 service hello-node --url: exit status 89 (44.789166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-682000 service hello-node --url": exit status 89
functional_test.go:1561: found endpoint for hello-node: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test.go:1565: failed to parse "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"": parse "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p functional-682000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 89. stderr: I0304 04:06:31.100753   15981 out.go:291] Setting OutFile to fd 1 ...
I0304 04:06:31.100911   15981 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:06:31.100914   15981 out.go:304] Setting ErrFile to fd 2...
I0304 04:06:31.100916   15981 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:06:31.101036   15981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:06:31.101256   15981 mustload.go:65] Loading cluster: functional-682000
I0304 04:06:31.101454   15981 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:06:31.106191   15981 out.go:177] * The control plane node must be running for this command
I0304 04:06:31.118237   15981 out.go:177]   To start a cluster, run: "minikube start -p functional-682000"

stdout: * The control plane node must be running for this command
To start a cluster, run: "minikube start -p functional-682000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 15982: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-682000": client config: context "functional-682000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (89.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-682000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-682000 get svc nginx-svc: exit status 1 (69.216125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-682000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-682000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (89.29s)
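The `Get "http:": http: no Host in request URL` error above is what you get when the tunnel never supplies a service IP: the test builds `"http://" + ip` with an empty `ip`, producing a URL with no host component, which the HTTP client rejects before dialing. A small sketch of that failure mode (plain parameter expansion; no cluster required):

```shell
# svc_ip would come from the tunnel / the nginx-svc LoadBalancer; here it is
# empty, mirroring the failed run.
svc_ip=""
url="http://${svc_ip}"

host="${url#http://}"   # strip the scheme prefix
host="${host%%/*}"      # strip any path, leaving just the host part

if [ -z "$host" ]; then
  echo "no Host in request URL"   # analogue of Go's net/http error
else
  echo "GET $url"
fi
```

With a real tunnel, `svc_ip` would be the ClusterIP/LoadBalancer address and the request would go through.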

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image load --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-682000 image load --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr: (1.487148209s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-682000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image load --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-682000 image load --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr: (1.368013125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-682000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.2069565s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-682000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image load --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-682000 image load --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr: (1.214527791s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-682000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image save gcr.io/google-containers/addon-resizer:functional-682000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-682000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.023601167s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
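The assertion at functional_test_tunnel_test.go:329 checks that dig's header contains "ANSWER: 1"; with the tunnel down, dig times out against 10.96.0.10 and that header never appears. A sketch of the check itself, with a canned dig-style reply standing in for the live query (since 10.96.0.10 is only reachable while the tunnel is up):

```shell
# Canned reply modeled on a successful dig header line (contents assumed).
reply=';; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0'

# The same substring match the test performs on dig's output.
if printf '%s\n' "$reply" | grep -q 'ANSWER: 1'; then
  echo "resolved"
else
  echo "no answer"
fi
```

In the failed run above, the output contained only `;; connection timed out; no servers could be reached`, so this check takes the "no answer" branch.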

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (35.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (35.44s)

TestImageBuild/serial/Setup (9.96s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-728000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-728000 --driver=qemu2 : exit status 80 (9.890106333s)

-- stdout --
	* [image-728000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node image-728000 in cluster image-728000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-728000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-728000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-728000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-728000 -n image-728000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-728000 -n image-728000: exit status 7 (69.6555ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-728000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.96s)

TestIngressAddonLegacy/StartLegacyK8sCluster (33.16s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-277000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ingress-addon-legacy-277000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (33.155054292s)

-- stdout --
	* [ingress-addon-legacy-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node ingress-addon-legacy-277000 in cluster ingress-addon-legacy-277000
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ingress-addon-legacy-277000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:09:11.599319   16321 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:09:11.599467   16321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:09:11.599471   16321 out.go:304] Setting ErrFile to fd 2...
	I0304 04:09:11.599473   16321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:09:11.599614   16321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:09:11.600659   16321 out.go:298] Setting JSON to false
	I0304 04:09:11.616824   16321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9523,"bootTime":1709544628,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:09:11.616891   16321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:09:11.623058   16321 out.go:177] * [ingress-addon-legacy-277000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:09:11.632249   16321 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:09:11.632310   16321 notify.go:220] Checking for updates...
	I0304 04:09:11.640161   16321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:09:11.643210   16321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:09:11.646116   16321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:09:11.649169   16321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:09:11.652226   16321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:09:11.655382   16321 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:09:11.660200   16321 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:09:11.667166   16321 start.go:299] selected driver: qemu2
	I0304 04:09:11.667173   16321 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:09:11.667180   16321 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:09:11.669404   16321 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:09:11.673165   16321 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:09:11.676317   16321 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:09:11.676369   16321 cni.go:84] Creating CNI manager for ""
	I0304 04:09:11.676378   16321 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0304 04:09:11.676382   16321 start_flags.go:323] config:
	{Name:ingress-addon-legacy-277000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:09:11.681103   16321 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:09:11.688134   16321 out.go:177] * Starting control plane node ingress-addon-legacy-277000 in cluster ingress-addon-legacy-277000
	I0304 04:09:11.692191   16321 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0304 04:09:12.344465   16321 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0304 04:09:12.344531   16321 cache.go:56] Caching tarball of preloaded images
	I0304 04:09:12.345191   16321 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0304 04:09:12.349767   16321 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0304 04:09:12.353638   16321 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:09:13.049666   16321 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0304 04:09:33.388893   16321 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:09:33.389076   16321 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:09:34.136431   16321 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0304 04:09:34.136633   16321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/ingress-addon-legacy-277000/config.json ...
	I0304 04:09:34.136649   16321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/ingress-addon-legacy-277000/config.json: {Name:mk7abdc45edaf8dba26bc89e8399bbe4c385c6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:09:34.136908   16321 start.go:365] acquiring machines lock for ingress-addon-legacy-277000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:09:34.136948   16321 start.go:369] acquired machines lock for "ingress-addon-legacy-277000" in 31.708µs
	I0304 04:09:34.136958   16321 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:09:34.136997   16321 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:09:34.142040   16321 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0304 04:09:34.157615   16321 start.go:159] libmachine.API.Create for "ingress-addon-legacy-277000" (driver="qemu2")
	I0304 04:09:34.157648   16321 client.go:168] LocalClient.Create starting
	I0304 04:09:34.157718   16321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:09:34.157749   16321 main.go:141] libmachine: Decoding PEM data...
	I0304 04:09:34.157759   16321 main.go:141] libmachine: Parsing certificate...
	I0304 04:09:34.157796   16321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:09:34.157817   16321 main.go:141] libmachine: Decoding PEM data...
	I0304 04:09:34.157825   16321 main.go:141] libmachine: Parsing certificate...
	I0304 04:09:34.158169   16321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:09:34.912917   16321 main.go:141] libmachine: Creating SSH key...
	I0304 04:09:34.996068   16321 main.go:141] libmachine: Creating Disk image...
	I0304 04:09:34.996076   16321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:09:34.996266   16321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2
	I0304 04:09:35.008650   16321 main.go:141] libmachine: STDOUT: 
	I0304 04:09:35.008670   16321 main.go:141] libmachine: STDERR: 
	I0304 04:09:35.008736   16321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2 +20000M
	I0304 04:09:35.019606   16321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:09:35.019621   16321 main.go:141] libmachine: STDERR: 
	I0304 04:09:35.019638   16321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2
	I0304 04:09:35.019645   16321 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:09:35.019682   16321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:e9:7c:ee:92:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2
	I0304 04:09:35.021538   16321 main.go:141] libmachine: STDOUT: 
	I0304 04:09:35.021551   16321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:09:35.021572   16321 client.go:171] LocalClient.Create took 863.924042ms
	I0304 04:09:37.023750   16321 start.go:128] duration metric: createHost completed in 2.886748917s
	I0304 04:09:37.023847   16321 start.go:83] releasing machines lock for "ingress-addon-legacy-277000", held for 2.886907084s
	W0304 04:09:37.023948   16321 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:09:37.031142   16321 out.go:177] * Deleting "ingress-addon-legacy-277000" in qemu2 ...
	W0304 04:09:37.070320   16321 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:09:37.070346   16321 start.go:709] Will try again in 5 seconds ...
	I0304 04:09:42.072586   16321 start.go:365] acquiring machines lock for ingress-addon-legacy-277000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:09:42.073105   16321 start.go:369] acquired machines lock for "ingress-addon-legacy-277000" in 372.541µs
	I0304 04:09:42.073230   16321 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-277000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-277000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:09:42.073522   16321 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:09:42.084131   16321 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0304 04:09:42.127632   16321 start.go:159] libmachine.API.Create for "ingress-addon-legacy-277000" (driver="qemu2")
	I0304 04:09:42.127684   16321 client.go:168] LocalClient.Create starting
	I0304 04:09:42.127796   16321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:09:42.127852   16321 main.go:141] libmachine: Decoding PEM data...
	I0304 04:09:42.127870   16321 main.go:141] libmachine: Parsing certificate...
	I0304 04:09:42.127944   16321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:09:42.127985   16321 main.go:141] libmachine: Decoding PEM data...
	I0304 04:09:42.127996   16321 main.go:141] libmachine: Parsing certificate...
	I0304 04:09:42.128490   16321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:09:42.510299   16321 main.go:141] libmachine: Creating SSH key...
	I0304 04:09:42.626189   16321 main.go:141] libmachine: Creating Disk image...
	I0304 04:09:42.626200   16321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:09:42.626408   16321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2
	I0304 04:09:42.639074   16321 main.go:141] libmachine: STDOUT: 
	I0304 04:09:42.639104   16321 main.go:141] libmachine: STDERR: 
	I0304 04:09:42.639169   16321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2 +20000M
	I0304 04:09:42.650148   16321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:09:42.650160   16321 main.go:141] libmachine: STDERR: 
	I0304 04:09:42.650183   16321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2
	I0304 04:09:42.650195   16321 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:09:42.650236   16321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ff:b5:43:ba:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/ingress-addon-legacy-277000/disk.qcow2
	I0304 04:09:42.652026   16321 main.go:141] libmachine: STDOUT: 
	I0304 04:09:42.652041   16321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:09:42.652061   16321 client.go:171] LocalClient.Create took 524.372542ms
	I0304 04:09:44.653938   16321 start.go:128] duration metric: createHost completed in 2.580391416s
	I0304 04:09:44.654096   16321 start.go:83] releasing machines lock for "ingress-addon-legacy-277000", held for 2.580979292s
	W0304 04:09:44.654452   16321 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ingress-addon-legacy-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ingress-addon-legacy-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:09:44.665303   16321 out.go:177] 
	W0304 04:09:44.675481   16321 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:09:44.675546   16321 out.go:239] * 
	* 
	W0304 04:09:44.678026   16321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:09:44.686237   16321 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-arm64 start -p ingress-addon-legacy-277000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (33.16s)
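Note: the VM-start failures in this report share one root cause: nothing was accepting connections on /var/run/socket_vmnet when QEMU tried to attach. As a triage sketch (the helper name and the buckets are ours, not part of minikube), the recurring failure lines could be grouped like this:

```shell
# Hypothetical triage helper: bucket a failure line from this report by
# its likely root cause. Patterns mirror the messages seen in the log above.
classify_failure() {
  case "$1" in
    *'Failed to connect to "/var/run/socket_vmnet"'*)
      # socket_vmnet daemon not running (or socket missing) on the host
      echo "socket_vmnet down" ;;
    *'context deadline exceeded'*)
      # HTTP/DNS request timed out (e.g. the tunnel tests)
      echo "timeout" ;;
    *)
      echo "unknown" ;;
  esac
}

classify_failure 'ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused'
# prints "socket_vmnet down"
```

If this diagnosis holds, verifying that the socket_vmnet service is running on the build agent (and that /var/run/socket_vmnet exists) before rerunning the suite would likely clear most of these failures at once.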

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.13s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-277000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ingress-addon-legacy-277000 addons enable ingress --alsologtostderr -v=5: exit status 10 (90.3455ms)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* Verifying ingress addon...
	
	
-- /stdout --
** stderr ** 
	I0304 04:09:44.777607   16347 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:09:44.779017   16347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:09:44.779022   16347 out.go:304] Setting ErrFile to fd 2...
	I0304 04:09:44.779025   16347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:09:44.779234   16347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:09:44.779531   16347 mustload.go:65] Loading cluster: ingress-addon-legacy-277000
	I0304 04:09:44.779789   16347 config.go:182] Loaded profile config "ingress-addon-legacy-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0304 04:09:44.779820   16347 addons.go:597] checking whether the cluster is paused
	I0304 04:09:44.779886   16347 config.go:182] Loaded profile config "ingress-addon-legacy-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0304 04:09:44.779893   16347 host.go:66] Checking if "ingress-addon-legacy-277000" exists ...
	I0304 04:09:44.783931   16347 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0304 04:09:44.787789   16347 config.go:182] Loaded profile config "ingress-addon-legacy-277000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0304 04:09:44.787799   16347 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-277000"
	I0304 04:09:44.787805   16347 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-277000"
	I0304 04:09:44.787837   16347 host.go:66] Checking if "ingress-addon-legacy-277000" exists ...
	W0304 04:09:44.788080   16347 host.go:58] "ingress-addon-legacy-277000" host status: Stopped
	W0304 04:09:44.788086   16347 addons.go:280] "ingress-addon-legacy-277000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0304 04:09:44.788091   16347 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-277000"
	I0304 04:09:44.791770   16347 out.go:177] * Verifying ingress addon...
	I0304 04:09:44.794885   16347 loader.go:141] Config not found: /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:09:44.798786   16347 out.go:177] 
	W0304 04:09:44.801950   16347 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-277000" does not exist: client config: context "ingress-addon-legacy-277000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-277000" does not exist: client config: context "ingress-addon-legacy-277000" does not exist]
	W0304 04:09:44.801957   16347 out.go:239] * 
	* 
	W0304 04:09:44.805668   16347 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:09:44.809808   16347 out.go:177] 
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-277000 -n ingress-addon-legacy-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-277000 -n ingress-addon-legacy-277000: exit status 7 (36.109ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.13s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.03s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-277000 -n ingress-addon-legacy-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-277000 -n ingress-addon-legacy-277000: exit status 7 (31.837ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-277000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.03s)

TestJSONOutput/start/Command (9.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-883000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-883000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.861107792s)
-- stdout --
	{"specversion":"1.0","id":"37cacccc-52d0-4fbd-8bdd-6ddaa093bb2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-883000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b89a3cf-c194-443a-a428-451e42d4a8ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18284"}}
	{"specversion":"1.0","id":"aa7f5fa8-aa2c-428e-99b7-bfcc7485c3a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig"}}
	{"specversion":"1.0","id":"51466adb-0a33-4543-b50e-e8d9523076f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4722204c-5b0c-47b1-8470-97896a0611c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad1bcfb0-7cee-48d1-b3e3-406a9531e4fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube"}}
	{"specversion":"1.0","id":"c7699431-71bd-40d2-a713-e2df2780262b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2459d999-eb43-4ff1-9bb9-fb486e7fb55b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc197c91-69dc-4d54-8913-f5406ba47f5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"12070588-647d-4ff0-a1df-5393c0fc05c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-883000 in cluster json-output-883000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"09f3de8a-e819-4d67-b173-8bf55e82d12e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"198ee7d7-27a4-4e4e-8b30-3af7f34abaa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-883000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e33e856-235d-48a5-bc99-4428e88bc4da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5c1cd441-8406-45da-a97b-55de71828e8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"69cb75e7-6618-479e-8569-8eec69a39970","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-883000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"80d226d6-596b-4177-b722-4e9de4637b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"db5c37a9-6119-4d0b-9c02-f36fbad999a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-883000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.86s)
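Note: the decode error above (`invalid character 'O' looking for beginning of value`) comes from the raw `OUTPUT:`/`ERROR:` text that socket_vmnet interleaves into what should be a one-CloudEvent-per-line JSON stream. A minimal sketch (hypothetical helper, not the test's actual Go code) of how such a stream trips the first non-JSON line:

```python
# Sketch of the json_output_test failure mode: scan a --output=json stream
# line by line and report the first non-empty line that is not valid JSON.
import json

def first_invalid_line(stream):
    """Return the first non-empty line that fails to parse as JSON, else None."""
    for line in stream.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
        except json.JSONDecodeError:
            return line  # Go's decoder fails here with "invalid character ..."
    return None
```

Fed the stream above, the first offender is the bare `OUTPUT:` line, matching the `'O'` in the reported error.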

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-883000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-883000 --output=json --user=testUser: exit status 89 (82.123417ms)
-- stdout --
	{"specversion":"1.0","id":"79b5b2cd-4555-4be1-a3cd-701af4c5d8ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control plane node must be running for this command"}}
	{"specversion":"1.0","id":"dab080fc-af33-403f-a9f4-b2d69f61a915","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-883000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-883000 --output=json --user=testUser": exit status 89
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-883000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-883000 --output=json --user=testUser: exit status 89 (49.113959ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p json-output-883000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-883000 --output=json --user=testUser": exit status 89
json_output_test.go:213: unable to marshal output: * The control plane node must be running for this command
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.32s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-297000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-297000 --driver=qemu2 : exit status 80 (9.852011667s)
-- stdout --
	* [first-297000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node first-297000 in cluster first-297000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-297000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-04 04:10:05.406885 -0800 PST m=+352.001064335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-298000 -n second-298000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-298000 -n second-298000: exit status 85 (81.923708ms)
-- stdout --
	* Profile "second-298000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-298000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-298000" host is not running, skipping log retrieval (state="* Profile \"second-298000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-298000\"")
helpers_test.go:175: Cleaning up "second-298000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-298000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-04 04:10:05.731313 -0800 PST m=+352.325494460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-297000 -n first-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-297000 -n first-297000: exit status 7 (32.136042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-297000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-297000
--- FAIL: TestMinikubeProfile (10.32s)

TestMountStart/serial/StartWithMountFirst (11.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-255000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-255000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (11.015094625s)
-- stdout --
	* [mount-start-1-255000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-255000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-255000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-255000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-255000 -n mount-start-1-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-255000 -n mount-start-1-255000: exit status 7 (67.034625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-255000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (11.08s)

TestMultiNode/serial/FreshStart2Nodes (10.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.02486775s)
-- stdout --
	* [multinode-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-386000 in cluster multinode-386000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-386000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0304 04:10:17.322495   16476 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:10:17.322621   16476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:10:17.322624   16476 out.go:304] Setting ErrFile to fd 2...
	I0304 04:10:17.322627   16476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:10:17.322746   16476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:10:17.323873   16476 out.go:298] Setting JSON to false
	I0304 04:10:17.340145   16476 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9589,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:10:17.340212   16476 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:10:17.347460   16476 out.go:177] * [multinode-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:10:17.355420   16476 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:10:17.359527   16476 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:10:17.355462   16476 notify.go:220] Checking for updates...
	I0304 04:10:17.365434   16476 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:10:17.368459   16476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:10:17.369953   16476 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:10:17.373432   16476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:10:17.376613   16476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:10:17.380296   16476 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:10:17.387408   16476 start.go:299] selected driver: qemu2
	I0304 04:10:17.387416   16476 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:10:17.387421   16476 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:10:17.389740   16476 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:10:17.392542   16476 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:10:17.395537   16476 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:10:17.395579   16476 cni.go:84] Creating CNI manager for ""
	I0304 04:10:17.395584   16476 cni.go:136] 0 nodes found, recommending kindnet
	I0304 04:10:17.395593   16476 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0304 04:10:17.395599   16476 start_flags.go:323] config:
	{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:10:17.400332   16476 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:10:17.406352   16476 out.go:177] * Starting control plane node multinode-386000 in cluster multinode-386000
	I0304 04:10:17.410411   16476 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:10:17.410427   16476 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:10:17.410444   16476 cache.go:56] Caching tarball of preloaded images
	I0304 04:10:17.410506   16476 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:10:17.410523   16476 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:10:17.410752   16476 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/multinode-386000/config.json ...
	I0304 04:10:17.410766   16476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/multinode-386000/config.json: {Name:mkabfcc375a8a8fc70b9ff657a3c51bf6975351a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:10:17.410990   16476 start.go:365] acquiring machines lock for multinode-386000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:10:17.411023   16476 start.go:369] acquired machines lock for "multinode-386000" in 27.125µs
	I0304 04:10:17.411035   16476 start.go:93] Provisioning new machine with config: &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:10:17.411069   16476 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:10:17.418430   16476 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:10:17.435904   16476 start.go:159] libmachine.API.Create for "multinode-386000" (driver="qemu2")
	I0304 04:10:17.435927   16476 client.go:168] LocalClient.Create starting
	I0304 04:10:17.435988   16476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:10:17.436019   16476 main.go:141] libmachine: Decoding PEM data...
	I0304 04:10:17.436028   16476 main.go:141] libmachine: Parsing certificate...
	I0304 04:10:17.436072   16476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:10:17.436096   16476 main.go:141] libmachine: Decoding PEM data...
	I0304 04:10:17.436103   16476 main.go:141] libmachine: Parsing certificate...
	I0304 04:10:17.436484   16476 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:10:17.579155   16476 main.go:141] libmachine: Creating SSH key...
	I0304 04:10:17.780372   16476 main.go:141] libmachine: Creating Disk image...
	I0304 04:10:17.780381   16476 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:10:17.780581   16476 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:10:17.793145   16476 main.go:141] libmachine: STDOUT: 
	I0304 04:10:17.793162   16476 main.go:141] libmachine: STDERR: 
	I0304 04:10:17.793212   16476 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2 +20000M
	I0304 04:10:17.803780   16476 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:10:17.803796   16476 main.go:141] libmachine: STDERR: 
	I0304 04:10:17.803813   16476 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:10:17.803818   16476 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:10:17.803848   16476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:31:78:f9:71:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:10:17.805583   16476 main.go:141] libmachine: STDOUT: 
	I0304 04:10:17.805597   16476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:10:17.805616   16476 client.go:171] LocalClient.Create took 369.685708ms
	I0304 04:10:19.807869   16476 start.go:128] duration metric: createHost completed in 2.396779s
	I0304 04:10:19.807966   16476 start.go:83] releasing machines lock for "multinode-386000", held for 2.396947375s
	W0304 04:10:19.808021   16476 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:10:19.821136   16476 out.go:177] * Deleting "multinode-386000" in qemu2 ...
	W0304 04:10:19.858808   16476 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:10:19.858842   16476 start.go:709] Will try again in 5 seconds ...
	I0304 04:10:24.860998   16476 start.go:365] acquiring machines lock for multinode-386000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:10:24.861475   16476 start.go:369] acquired machines lock for "multinode-386000" in 349.083µs
	I0304 04:10:24.861607   16476 start.go:93] Provisioning new machine with config: &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:10:24.861882   16476 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:10:24.874468   16476 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:10:24.923078   16476 start.go:159] libmachine.API.Create for "multinode-386000" (driver="qemu2")
	I0304 04:10:24.923117   16476 client.go:168] LocalClient.Create starting
	I0304 04:10:24.923235   16476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:10:24.923305   16476 main.go:141] libmachine: Decoding PEM data...
	I0304 04:10:24.923326   16476 main.go:141] libmachine: Parsing certificate...
	I0304 04:10:24.923397   16476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:10:24.923438   16476 main.go:141] libmachine: Decoding PEM data...
	I0304 04:10:24.923454   16476 main.go:141] libmachine: Parsing certificate...
	I0304 04:10:24.924000   16476 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:10:25.078377   16476 main.go:141] libmachine: Creating SSH key...
	I0304 04:10:25.241570   16476 main.go:141] libmachine: Creating Disk image...
	I0304 04:10:25.241576   16476 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:10:25.241791   16476 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:10:25.254286   16476 main.go:141] libmachine: STDOUT: 
	I0304 04:10:25.254305   16476 main.go:141] libmachine: STDERR: 
	I0304 04:10:25.254357   16476 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2 +20000M
	I0304 04:10:25.264926   16476 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:10:25.264942   16476 main.go:141] libmachine: STDERR: 
	I0304 04:10:25.264957   16476 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:10:25.264965   16476 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:10:25.265003   16476 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:d8:4b:02:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:10:25.266684   16476 main.go:141] libmachine: STDOUT: 
	I0304 04:10:25.266699   16476 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:10:25.266714   16476 client.go:171] LocalClient.Create took 343.593375ms
	I0304 04:10:27.268874   16476 start.go:128] duration metric: createHost completed in 2.406973458s
	I0304 04:10:27.268942   16476 start.go:83] releasing machines lock for "multinode-386000", held for 2.407454333s
	W0304 04:10:27.269384   16476 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:10:27.286034   16476 out.go:177] 
	W0304 04:10:27.289090   16476 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:10:27.289116   16476 out.go:239] * 
	W0304 04:10:27.291573   16476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:10:27.302985   16476 out.go:177] 

** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-386000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (69.746ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.10s)
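Every error in the run above reduces to the same root cause: the qemu2 driver could not reach the socket_vmnet daemon at `/var/run/socket_vmnet` (the `SocketVMnetPath` in the cluster config), so `LocalClient.Create` failed both times. A minimal preflight sketch for that condition is below; the socket path is taken from the log, and how socket_vmnet gets (re)started on the CI host is an assumption, not something this report states.

```shell
# Diagnostic sketch: verify the unix socket that the qemu2 driver's
# "Connection refused" errors point at actually exists before starting
# the test run. Path copied from the log (SocketVMnetPath).
SOCK="/var/run/socket_vmnet"

if [ -S "$SOCK" ]; then
  # socket_vmnet is up; minikube's qemu-system-aarch64 invocation
  # (via socket_vmnet_client) should be able to connect.
  echo "socket present: $SOCK"
else
  # socket_vmnet is not running on this host; restarting the daemon
  # (however it is managed here -- launchd, brew services, etc.) is
  # the likely fix for the GUEST_PROVISION failures above.
  echo "socket missing: $SOCK"
fi
```

If the socket is missing, the second retry in the log was always going to fail the same way, which is consistent with both `createHost` attempts erroring out in under three seconds.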

TestMultiNode/serial/DeployApp2Nodes (115.66s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.961042ms)

** stderr ** 
	error: cluster "multinode-386000" does not exist

** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- rollout status deployment/busybox: exit status 1 (58.502083ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.907708ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.008458ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.240333ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.349833ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.289333ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.355ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.733917ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.405875ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.2245ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.497917ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.236542ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.180667ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.433084ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.939083ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.65725ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.243583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-386000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.103042ms)

** stderr ** 
	error: no server found for cluster "multinode-386000"

** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.325625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-386000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-386000 -v 3 --alsologtostderr: exit status 89 (45.343458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-386000"

-- /stdout --
** stderr ** 
	I0304 04:12:23.172081   16611 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:23.172429   16611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.172438   16611 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:23.172440   16611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.172641   16611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:23.172863   16611 mustload.go:65] Loading cluster: multinode-386000
	I0304 04:12:23.173055   16611 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:23.177828   16611 out.go:177] * The control plane node must be running for this command
	I0304 04:12:23.181910   16611 out.go:177]   To start a cluster, run: "minikube start -p multinode-386000"

** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-386000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.058875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-386000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-386000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.308ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-386000

** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-386000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-386000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (33.035583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:156: expected profile "multinode-386000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-386000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-386000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-386000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.252916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --output json --alsologtostderr: exit status 7 (32.383ms)

-- stdout --
	{"Name":"multinode-386000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0304 04:12:23.414728   16624 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:23.414885   16624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.414888   16624 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:23.414891   16624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.415019   16624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:23.415139   16624 out.go:298] Setting JSON to true
	I0304 04:12:23.415150   16624 mustload.go:65] Loading cluster: multinode-386000
	I0304 04:12:23.415211   16624 notify.go:220] Checking for updates...
	I0304 04:12:23.415365   16624 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:23.415371   16624 status.go:255] checking status of multinode-386000 ...
	I0304 04:12:23.415568   16624 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0304 04:12:23.415571   16624 status.go:343] host is not running, skipping remaining checks
	I0304 04:12:23.415574   16624 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-386000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (31.605666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 node stop m03: exit status 85 (48.728834ms)

-- stdout --

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-386000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status: exit status 7 (32.676042ms)

-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr: exit status 7 (32.415208ms)

-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
** stderr ** 
	I0304 04:12:23.560976   16632 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:23.561121   16632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.561124   16632 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:23.561127   16632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.561264   16632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:23.561393   16632 out.go:298] Setting JSON to false
	I0304 04:12:23.561403   16632 mustload.go:65] Loading cluster: multinode-386000
	I0304 04:12:23.561466   16632 notify.go:220] Checking for updates...
	I0304 04:12:23.561611   16632 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:23.561618   16632 status.go:255] checking status of multinode-386000 ...
	I0304 04:12:23.561820   16632 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0304 04:12:23.561824   16632 status.go:343] host is not running, skipping remaining checks
	I0304 04:12:23.561826   16632 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr": multinode-386000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.259583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 node start m03 --alsologtostderr: exit status 85 (50.036333ms)

-- stdout --

-- /stdout --
** stderr ** 
	I0304 04:12:23.625357   16636 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:23.625724   16636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.625728   16636 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:23.625730   16636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.625857   16636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:23.626058   16636 mustload.go:65] Loading cluster: multinode-386000
	I0304 04:12:23.626243   16636 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:23.630404   16636 out.go:177] 
	W0304 04:12:23.634525   16636 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0304 04:12:23.634530   16636 out.go:239] * 
	* 
	W0304 04:12:23.636713   16636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:12:23.640480   16636 out.go:177] 

** /stderr **
multinode_test.go:284: I0304 04:12:23.625357   16636 out.go:291] Setting OutFile to fd 1 ...
I0304 04:12:23.625724   16636 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:12:23.625728   16636 out.go:304] Setting ErrFile to fd 2...
I0304 04:12:23.625730   16636 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0304 04:12:23.625857   16636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
I0304 04:12:23.626058   16636 mustload.go:65] Loading cluster: multinode-386000
I0304 04:12:23.626243   16636 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0304 04:12:23.630404   16636 out.go:177] 
W0304 04:12:23.634525   16636 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0304 04:12:23.634530   16636 out.go:239] * 
* 
W0304 04:12:23.636713   16636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0304 04:12:23.640480   16636 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-386000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status: exit status 7 (32.6725ms)

-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-386000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.495667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.12s)

TestMultiNode/serial/RestartKeepsNodes (5.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-386000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-386000
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.181569583s)

-- stdout --
	* [multinode-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-386000 in cluster multinode-386000
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:12:23.836931   16646 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:23.837065   16646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.837068   16646 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:23.837070   16646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:23.837189   16646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:23.838179   16646 out.go:298] Setting JSON to false
	I0304 04:12:23.854426   16646 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9715,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:12:23.854492   16646 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:12:23.859571   16646 out.go:177] * [multinode-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:12:23.866489   16646 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:12:23.866563   16646 notify.go:220] Checking for updates...
	I0304 04:12:23.869560   16646 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:12:23.870916   16646 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:12:23.873494   16646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:12:23.876548   16646 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:12:23.879510   16646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:12:23.882832   16646 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:23.882887   16646 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:12:23.887515   16646 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:12:23.894490   16646 start.go:299] selected driver: qemu2
	I0304 04:12:23.894501   16646 start.go:903] validating driver "qemu2" against &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:12:23.894571   16646 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:12:23.896807   16646 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:12:23.896854   16646 cni.go:84] Creating CNI manager for ""
	I0304 04:12:23.896860   16646 cni.go:136] 1 nodes found, recommending kindnet
	I0304 04:12:23.896870   16646 start_flags.go:323] config:
	{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:12:23.901395   16646 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:23.907441   16646 out.go:177] * Starting control plane node multinode-386000 in cluster multinode-386000
	I0304 04:12:23.911513   16646 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:12:23.911536   16646 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:12:23.911546   16646 cache.go:56] Caching tarball of preloaded images
	I0304 04:12:23.911600   16646 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:12:23.911606   16646 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:12:23.911675   16646 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/multinode-386000/config.json ...
	I0304 04:12:23.912103   16646 start.go:365] acquiring machines lock for multinode-386000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:12:23.912134   16646 start.go:369] acquired machines lock for "multinode-386000" in 25.042µs
	I0304 04:12:23.912142   16646 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:12:23.912147   16646 fix.go:54] fixHost starting: 
	I0304 04:12:23.912250   16646 fix.go:102] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0304 04:12:23.912258   16646 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:12:23.915451   16646 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0304 04:12:23.923510   16646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:d8:4b:02:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:12:23.925502   16646 main.go:141] libmachine: STDOUT: 
	I0304 04:12:23.925524   16646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:12:23.925552   16646 fix.go:56] fixHost completed within 13.404542ms
	I0304 04:12:23.925556   16646 start.go:83] releasing machines lock for "multinode-386000", held for 13.417583ms
	W0304 04:12:23.925562   16646 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:12:23.925591   16646 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:12:23.925596   16646 start.go:709] Will try again in 5 seconds ...
	I0304 04:12:28.927697   16646 start.go:365] acquiring machines lock for multinode-386000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:12:28.928010   16646 start.go:369] acquired machines lock for "multinode-386000" in 243.583µs
	I0304 04:12:28.928124   16646 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:12:28.928144   16646 fix.go:54] fixHost starting: 
	I0304 04:12:28.928815   16646 fix.go:102] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0304 04:12:28.928843   16646 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:12:28.939125   16646 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0304 04:12:28.943391   16646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:d8:4b:02:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:12:28.952907   16646 main.go:141] libmachine: STDOUT: 
	I0304 04:12:28.952992   16646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:12:28.953150   16646 fix.go:56] fixHost completed within 24.983125ms
	I0304 04:12:28.953171   16646 start.go:83] releasing machines lock for "multinode-386000", held for 25.138875ms
	W0304 04:12:28.953364   16646 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:12:28.960222   16646 out.go:177] 
	W0304 04:12:28.964324   16646 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:12:28.964368   16646 out.go:239] * 
	* 
	W0304 04:12:28.966535   16646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:12:28.976200   16646 out.go:177] 

** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-386000" : exit status 80
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-386000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (35.901292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.38s)
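Every restart attempt in the failure above dies on the same driver error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. One quick way to confirm a run failed only on this root cause is to grep the captured output for that message. The sketch below runs against a throwaway sample file for illustration; in a real triage, `logs.txt` would be the file produced by `minikube logs --file=logs.txt`, as the report itself suggests.

```shell
# Triage sketch: count socket_vmnet connection failures in a captured log.
# The sample file here is illustrative; point the grep at a real logs.txt.
cat > logs.txt <<'EOF'
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
EOF

# -c prints the number of matching lines instead of the lines themselves.
failures=$(grep -c 'Failed to connect to "/var/run/socket_vmnet"' logs.txt)
echo "socket_vmnet connection failures: $failures"
```

A count that matches the number of `* Restarting existing qemu2 VM` attempts (two per start, given the retry in `start.go:709`) indicates the VM never got past host networking.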

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 node delete m03: exit status 89 (48.463333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-386000"

-- /stdout --
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-386000 node delete m03": exit status 89
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr: exit status 7 (32.503416ms)

-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0304 04:12:29.173772   16662 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:29.173938   16662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:29.173942   16662 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:29.173944   16662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:29.174079   16662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:29.174194   16662 out.go:298] Setting JSON to false
	I0304 04:12:29.174211   16662 mustload.go:65] Loading cluster: multinode-386000
	I0304 04:12:29.174268   16662 notify.go:220] Checking for updates...
	I0304 04:12:29.174421   16662 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:29.174427   16662 status.go:255] checking status of multinode-386000 ...
	I0304 04:12:29.174630   16662 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0304 04:12:29.174633   16662 status.go:343] host is not running, skipping remaining checks
	I0304 04:12:29.174635   16662 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.743125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (0.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 stop
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status: exit status 7 (32.731541ms)

-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr: exit status 7 (32.056334ms)

-- stdout --
	multinode-386000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0304 04:12:29.332095   16670 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:29.332249   16670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:29.332253   16670 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:29.332255   16670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:29.332408   16670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:29.332536   16670 out.go:298] Setting JSON to false
	I0304 04:12:29.332547   16670 mustload.go:65] Loading cluster: multinode-386000
	I0304 04:12:29.332598   16670 notify.go:220] Checking for updates...
	I0304 04:12:29.332760   16670 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:29.332767   16670 status.go:255] checking status of multinode-386000 ...
	I0304 04:12:29.332968   16670 status.go:330] multinode-386000 host status = "Stopped" (err=<nil>)
	I0304 04:12:29.332972   16670 status.go:343] host is not running, skipping remaining checks
	I0304 04:12:29.332974   16670 status.go:257] multinode-386000 status: &{Name:multinode-386000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr": multinode-386000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-386000 status --alsologtostderr": multinode-386000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.516041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.16s)
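The advice printed above (`minikube delete -p multinode-386000` may fix it) only resets the guest; the refused connection is on the host side. A hedged first check on the runner is whether the socket_vmnet endpoint exists at all — the path below matches the `SocketVMnetPath` value in the profile config dumped earlier, and the daemon itself is started out-of-band (e.g. via launchd on a Homebrew install).

```shell
# Sketch: check whether the socket_vmnet endpoint the qemu2 driver dials
# is present. A refused connection means the daemon is not running or not
# accepting; a missing socket file means it was never started.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  STATUS="listening"
else
  STATUS="missing"
fi
echo "socket_vmnet endpoint $SOCK: $STATUS"
```

If the socket is absent, restarting the daemon on the host before re-running the suite is the likely fix; deleting and recreating the minikube profile alone does not restore it.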

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.195990041s)

-- stdout --
	* [multinode-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-386000 in cluster multinode-386000
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:12:29.396608   16674 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:29.396730   16674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:29.396733   16674 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:29.396735   16674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:29.396848   16674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:29.397821   16674 out.go:298] Setting JSON to false
	I0304 04:12:29.413788   16674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9721,"bootTime":1709544628,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:12:29.413848   16674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:12:29.418124   16674 out.go:177] * [multinode-386000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:12:29.426072   16674 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:12:29.430082   16674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:12:29.426142   16674 notify.go:220] Checking for updates...
	I0304 04:12:29.440055   16674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:12:29.443078   16674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:12:29.446130   16674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:12:29.449088   16674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:12:29.452403   16674 config.go:182] Loaded profile config "multinode-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:29.452659   16674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:12:29.457091   16674 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:12:29.464050   16674 start.go:299] selected driver: qemu2
	I0304 04:12:29.464056   16674 start.go:903] validating driver "qemu2" against &{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:12:29.464111   16674 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:12:29.466344   16674 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:12:29.466383   16674 cni.go:84] Creating CNI manager for ""
	I0304 04:12:29.466388   16674 cni.go:136] 1 nodes found, recommending kindnet
	I0304 04:12:29.466393   16674 start_flags.go:323] config:
	{Name:multinode-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-386000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:12:29.470791   16674 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:29.478082   16674 out.go:177] * Starting control plane node multinode-386000 in cluster multinode-386000
	I0304 04:12:29.482067   16674 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:12:29.482097   16674 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:12:29.482105   16674 cache.go:56] Caching tarball of preloaded images
	I0304 04:12:29.482154   16674 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:12:29.482165   16674 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:12:29.482224   16674 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/multinode-386000/config.json ...
	I0304 04:12:29.482699   16674 start.go:365] acquiring machines lock for multinode-386000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:12:29.482728   16674 start.go:369] acquired machines lock for "multinode-386000" in 22.375µs
	I0304 04:12:29.482736   16674 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:12:29.482740   16674 fix.go:54] fixHost starting: 
	I0304 04:12:29.482867   16674 fix.go:102] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0304 04:12:29.482876   16674 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:12:29.487051   16674 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0304 04:12:29.495109   16674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:d8:4b:02:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:12:29.497256   16674 main.go:141] libmachine: STDOUT: 
	I0304 04:12:29.497276   16674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:12:29.497305   16674 fix.go:56] fixHost completed within 14.564042ms
	I0304 04:12:29.497310   16674 start.go:83] releasing machines lock for "multinode-386000", held for 14.578167ms
	W0304 04:12:29.497317   16674 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:12:29.497358   16674 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:12:29.497363   16674 start.go:709] Will try again in 5 seconds ...
	I0304 04:12:34.499496   16674 start.go:365] acquiring machines lock for multinode-386000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:12:34.500007   16674 start.go:369] acquired machines lock for "multinode-386000" in 413.042µs
	I0304 04:12:34.500147   16674 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:12:34.500172   16674 fix.go:54] fixHost starting: 
	I0304 04:12:34.500876   16674 fix.go:102] recreateIfNeeded on multinode-386000: state=Stopped err=<nil>
	W0304 04:12:34.500904   16674 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:12:34.506486   16674 out.go:177] * Restarting existing qemu2 VM for "multinode-386000" ...
	I0304 04:12:34.513472   16674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:d8:4b:02:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/multinode-386000/disk.qcow2
	I0304 04:12:34.523472   16674 main.go:141] libmachine: STDOUT: 
	I0304 04:12:34.523539   16674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:12:34.523617   16674 fix.go:56] fixHost completed within 23.446125ms
	I0304 04:12:34.523635   16674 start.go:83] releasing machines lock for "multinode-386000", held for 23.595958ms
	W0304 04:12:34.523916   16674 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:12:34.533346   16674 out.go:177] 
	W0304 04:12:34.537447   16674 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:12:34.537475   16674 out.go:239] * 
	* 
	W0304 04:12:34.540034   16674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:12:34.548157   16674 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-386000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (69.959041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-386000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000-m01 --driver=qemu2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000-m01 --driver=qemu2 : exit status 80 (10.654306166s)

                                                
                                                
-- stdout --
	* [multinode-386000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-386000-m01 in cluster multinode-386000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-386000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-386000-m02 --driver=qemu2 
multinode_test.go:488: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-386000-m02 --driver=qemu2 : exit status 80 (10.693206333s)

                                                
                                                
-- stdout --
	* [multinode-386000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-386000-m02 in cluster multinode-386000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-386000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-386000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:490: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-386000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-386000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-386000: exit status 89 (85.845417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-386000"

                                                
                                                
-- /stdout --
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-386000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-386000 -n multinode-386000: exit status 7 (32.968667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (21.61s)

                                                
                                    
TestPreload (10.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-438000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-438000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.981235125s)

                                                
                                                
-- stdout --
	* [test-preload-438000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-438000 in cluster test-preload-438000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-438000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:12:56.407957   16737 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:12:56.408072   16737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:56.408076   16737 out.go:304] Setting ErrFile to fd 2...
	I0304 04:12:56.408078   16737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:12:56.408206   16737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:12:56.409305   16737 out.go:298] Setting JSON to false
	I0304 04:12:56.425590   16737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9748,"bootTime":1709544628,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:12:56.425658   16737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:12:56.430533   16737 out.go:177] * [test-preload-438000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:12:56.438474   16737 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:12:56.443500   16737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:12:56.438511   16737 notify.go:220] Checking for updates...
	I0304 04:12:56.448508   16737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:12:56.451507   16737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:12:56.454506   16737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:12:56.457436   16737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:12:56.460821   16737 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:12:56.460875   16737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:12:56.465510   16737 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:12:56.472480   16737 start.go:299] selected driver: qemu2
	I0304 04:12:56.472492   16737 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:12:56.472500   16737 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:12:56.474866   16737 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:12:56.478515   16737 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:12:56.481506   16737 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:12:56.481555   16737 cni.go:84] Creating CNI manager for ""
	I0304 04:12:56.481563   16737 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:12:56.481568   16737 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:12:56.481580   16737 start_flags.go:323] config:
	{Name:test-preload-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:12:56.486062   16737 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.493358   16737 out.go:177] * Starting control plane node test-preload-438000 in cluster test-preload-438000
	I0304 04:12:56.497510   16737 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0304 04:12:56.497605   16737 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/test-preload-438000/config.json ...
	I0304 04:12:56.497630   16737 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/test-preload-438000/config.json: {Name:mk59303b599add60e1144eecec1c7caf1328b431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:12:56.497638   16737 cache.go:107] acquiring lock: {Name:mk7f58029d9b549ed1b53d9ce985d3e0b0f5f3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.497644   16737 cache.go:107] acquiring lock: {Name:mk9541e7cf9fdbcd7ed135c787d635f7a892e860 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.497752   16737 cache.go:107] acquiring lock: {Name:mk99d2cc3d36f205fab4274a4114be0452f3b223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.497653   16737 cache.go:107] acquiring lock: {Name:mk7df2a186b87644b85554a6b93108cadf6687a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.497932   16737 cache.go:107] acquiring lock: {Name:mkb3af91e0889485c14118e485ef418e916a260a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.497991   16737 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0304 04:12:56.497996   16737 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:12:56.498004   16737 cache.go:107] acquiring lock: {Name:mk147f2654b31366bee1926f9356c42d7a3705a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.498003   16737 cache.go:107] acquiring lock: {Name:mk47590af15e28afc5cb2bf8aa77ac0a9b732e6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.498038   16737 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0304 04:12:56.497994   16737 start.go:365] acquiring machines lock for test-preload-438000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:12:56.498047   16737 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0304 04:12:56.498016   16737 cache.go:107] acquiring lock: {Name:mk6973a6b2bbd95a690c4fb7c140927ee5b3a758 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:12:56.498140   16737 start.go:369] acquired machines lock for "test-preload-438000" in 84.875µs
	I0304 04:12:56.498178   16737 start.go:93] Provisioning new machine with config: &{Name:test-preload-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:12:56.498220   16737 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:12:56.498261   16737 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:12:56.506481   16737 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:12:56.498282   16737 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0304 04:12:56.498301   16737 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:12:56.498317   16737 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0304 04:12:56.509850   16737 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:12:56.509989   16737 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0304 04:12:56.510027   16737 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:12:56.510259   16737 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0304 04:12:56.512701   16737 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0304 04:12:56.512721   16737 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0304 04:12:56.512770   16737 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:12:56.512774   16737 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0304 04:12:56.525449   16737 start.go:159] libmachine.API.Create for "test-preload-438000" (driver="qemu2")
	I0304 04:12:56.525467   16737 client.go:168] LocalClient.Create starting
	I0304 04:12:56.525569   16737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:12:56.525600   16737 main.go:141] libmachine: Decoding PEM data...
	I0304 04:12:56.525611   16737 main.go:141] libmachine: Parsing certificate...
	I0304 04:12:56.525656   16737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:12:56.525680   16737 main.go:141] libmachine: Decoding PEM data...
	I0304 04:12:56.525689   16737 main.go:141] libmachine: Parsing certificate...
	I0304 04:12:56.526065   16737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:12:56.850288   16737 main.go:141] libmachine: Creating SSH key...
	I0304 04:12:56.948308   16737 main.go:141] libmachine: Creating Disk image...
	I0304 04:12:56.948315   16737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:12:56.948500   16737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2
	I0304 04:12:56.960602   16737 main.go:141] libmachine: STDOUT: 
	I0304 04:12:56.960633   16737 main.go:141] libmachine: STDERR: 
	I0304 04:12:56.960682   16737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2 +20000M
	I0304 04:12:56.971640   16737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:12:56.971656   16737 main.go:141] libmachine: STDERR: 
	I0304 04:12:56.971670   16737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2
	I0304 04:12:56.971674   16737 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:12:56.971705   16737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:78:fa:c2:bf:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2
	I0304 04:12:56.973709   16737 main.go:141] libmachine: STDOUT: 
	I0304 04:12:56.973725   16737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:12:56.973743   16737 client.go:171] LocalClient.Create took 448.273958ms
	W0304 04:12:58.475691   16737 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0304 04:12:58.475799   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0304 04:12:58.527352   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0304 04:12:58.572547   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0304 04:12:58.576929   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0304 04:12:58.578786   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0304 04:12:58.585140   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0304 04:12:58.598861   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0304 04:12:58.721137   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0304 04:12:58.721188   16737 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.223470875s
	I0304 04:12:58.721228   16737 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0304 04:12:58.925109   16737 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0304 04:12:58.925228   16737 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0304 04:12:58.973998   16737 start.go:128] duration metric: createHost completed in 2.475776667s
	I0304 04:12:58.974046   16737 start.go:83] releasing machines lock for "test-preload-438000", held for 2.475907291s
	W0304 04:12:58.974099   16737 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:12:58.983848   16737 out.go:177] * Deleting "test-preload-438000" in qemu2 ...
	W0304 04:12:59.016856   16737 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:12:59.016885   16737 start.go:709] Will try again in 5 seconds ...
	I0304 04:13:00.429284   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0304 04:13:00.429358   16737 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.931519875s
	I0304 04:13:00.429388   16737 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0304 04:13:00.605716   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0304 04:13:00.605767   16737 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.108149959s
	I0304 04:13:00.605829   16737 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0304 04:13:00.662718   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0304 04:13:00.662783   16737 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.164797625s
	I0304 04:13:00.662808   16737 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0304 04:13:02.367722   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0304 04:13:02.367767   16737 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.870159333s
	I0304 04:13:02.367791   16737 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0304 04:13:02.856268   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0304 04:13:02.856313   16737 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.358716s
	I0304 04:13:02.856337   16737 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0304 04:13:03.038216   16737 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0304 04:13:03.038274   16737 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.540392667s
	I0304 04:13:03.038305   16737 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0304 04:13:04.017310   16737 start.go:365] acquiring machines lock for test-preload-438000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:13:04.017671   16737 start.go:369] acquired machines lock for "test-preload-438000" in 291.958µs
	I0304 04:13:04.017784   16737 start.go:93] Provisioning new machine with config: &{Name:test-preload-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-438000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:13:04.018088   16737 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:13:04.027748   16737 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:13:04.076258   16737 start.go:159] libmachine.API.Create for "test-preload-438000" (driver="qemu2")
	I0304 04:13:04.076307   16737 client.go:168] LocalClient.Create starting
	I0304 04:13:04.076406   16737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:13:04.076465   16737 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:04.076489   16737 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:04.076550   16737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:13:04.076590   16737 main.go:141] libmachine: Decoding PEM data...
	I0304 04:13:04.076602   16737 main.go:141] libmachine: Parsing certificate...
	I0304 04:13:04.077130   16737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:13:04.242277   16737 main.go:141] libmachine: Creating SSH key...
	I0304 04:13:04.281634   16737 main.go:141] libmachine: Creating Disk image...
	I0304 04:13:04.281638   16737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:13:04.281816   16737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2
	I0304 04:13:04.294202   16737 main.go:141] libmachine: STDOUT: 
	I0304 04:13:04.294233   16737 main.go:141] libmachine: STDERR: 
	I0304 04:13:04.294279   16737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2 +20000M
	I0304 04:13:04.305214   16737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:13:04.305233   16737 main.go:141] libmachine: STDERR: 
	I0304 04:13:04.305244   16737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2
	I0304 04:13:04.305249   16737 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:13:04.305280   16737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:d0:fc:50:bd:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/test-preload-438000/disk.qcow2
	I0304 04:13:04.307160   16737 main.go:141] libmachine: STDOUT: 
	I0304 04:13:04.307178   16737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:13:04.307190   16737 client.go:171] LocalClient.Create took 230.878875ms
	I0304 04:13:06.307785   16737 start.go:128] duration metric: createHost completed in 2.289652834s
	I0304 04:13:06.307870   16737 start.go:83] releasing machines lock for "test-preload-438000", held for 2.290184709s
	W0304 04:13:06.308196   16737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:13:06.325820   16737 out.go:177] 
	W0304 04:13:06.329905   16737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:13:06.329942   16737 out.go:239] * 
	* 
	W0304 04:13:06.332576   16737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:13:06.342797   16737 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-438000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-04 04:13:06.361766 -0800 PST m=+532.957018793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-438000 -n test-preload-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-438000 -n test-preload-438000: exit status 7 (67.438667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-438000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-438000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-438000
--- FAIL: TestPreload (10.16s)

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-283000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-283000 --memory=2048 --driver=qemu2 : exit status 80 (9.856752542s)

-- stdout --
	* [scheduled-stop-283000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-283000 in cluster scheduled-stop-283000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-283000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-283000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-283000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-283000 in cluster scheduled-stop-283000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-283000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-283000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-04 04:13:16.393737 -0800 PST m=+542.989049001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-283000 -n scheduled-stop-283000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-283000 -n scheduled-stop-283000: exit status 7 (71.109792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-283000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-283000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-283000
--- FAIL: TestScheduledStopUnix (10.04s)

TestSkaffold (16.54s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2177270556 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-142000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-142000 --memory=2600 --driver=qemu2 : exit status 80 (9.906985958s)

-- stdout --
	* [skaffold-142000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-142000 in cluster skaffold-142000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-142000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-142000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-142000 in cluster skaffold-142000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-142000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-04 04:13:32.930734 -0800 PST m=+559.526143460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-142000 -n skaffold-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-142000 -n skaffold-142000: exit status 7 (63.698291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-142000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-142000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-142000
--- FAIL: TestSkaffold (16.54s)

TestRunningBinaryUpgrade (658.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1017908331 start -p running-upgrade-156000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1017908331 start -p running-upgrade-156000 --memory=2200 --vm-driver=qemu2 : (1m22.69035325s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-156000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-156000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m56.035814459s)

-- stdout --
	* [running-upgrade-156000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node running-upgrade-156000 in cluster running-upgrade-156000
	* Updating the running qemu2 "running-upgrade-156000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0304 04:15:44.138223   17177 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:15:44.138347   17177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:15:44.138350   17177 out.go:304] Setting ErrFile to fd 2...
	I0304 04:15:44.138353   17177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:15:44.138477   17177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:15:44.139433   17177 out.go:298] Setting JSON to false
	I0304 04:15:44.157618   17177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9916,"bootTime":1709544628,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:15:44.157698   17177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:15:44.162673   17177 out.go:177] * [running-upgrade-156000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:15:44.174546   17177 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:15:44.179444   17177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:15:44.174580   17177 notify.go:220] Checking for updates...
	I0304 04:15:44.187458   17177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:15:44.190574   17177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:15:44.193546   17177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:15:44.196568   17177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:15:44.199892   17177 config.go:182] Loaded profile config "running-upgrade-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:15:44.203493   17177 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0304 04:15:44.206528   17177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:15:44.210421   17177 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:15:44.217513   17177 start.go:299] selected driver: qemu2
	I0304 04:15:44.217521   17177 start.go:903] validating driver "qemu2" against &{Name:running-upgrade-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-156000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:15:44.217607   17177 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:15:44.220425   17177 cni.go:84] Creating CNI manager for ""
	I0304 04:15:44.220444   17177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:15:44.220453   17177 start_flags.go:323] config:
	{Name:running-upgrade-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-156000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:15:44.220557   17177 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:15:44.227470   17177 out.go:177] * Starting control plane node running-upgrade-156000 in cluster running-upgrade-156000
	I0304 04:15:44.231521   17177 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0304 04:15:44.231539   17177 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0304 04:15:44.231551   17177 cache.go:56] Caching tarball of preloaded images
	I0304 04:15:44.231605   17177 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:15:44.231611   17177 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0304 04:15:44.231672   17177 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/config.json ...
	I0304 04:15:44.232184   17177 start.go:365] acquiring machines lock for running-upgrade-156000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:15:44.232227   17177 start.go:369] acquired machines lock for "running-upgrade-156000" in 35.125µs
	I0304 04:15:44.232236   17177 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:15:44.232241   17177 fix.go:54] fixHost starting: 
	I0304 04:15:44.233065   17177 fix.go:102] recreateIfNeeded on running-upgrade-156000: state=Running err=<nil>
	W0304 04:15:44.233074   17177 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:15:44.236557   17177 out.go:177] * Updating the running qemu2 "running-upgrade-156000" VM ...
	I0304 04:15:44.244422   17177 machine.go:88] provisioning docker machine ...
	I0304 04:15:44.244438   17177 buildroot.go:166] provisioning hostname "running-upgrade-156000"
	I0304 04:15:44.244468   17177 main.go:141] libmachine: Using SSH client type: native
	I0304 04:15:44.244594   17177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025f1a30] 0x1025f4290 <nil>  [] 0s} localhost 52560 <nil> <nil>}
	I0304 04:15:44.244601   17177 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-156000 && echo "running-upgrade-156000" | sudo tee /etc/hostname
	I0304 04:15:44.306683   17177 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-156000
	
	I0304 04:15:44.306741   17177 main.go:141] libmachine: Using SSH client type: native
	I0304 04:15:44.306849   17177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025f1a30] 0x1025f4290 <nil>  [] 0s} localhost 52560 <nil> <nil>}
	I0304 04:15:44.306858   17177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-156000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-156000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-156000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0304 04:15:44.363064   17177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0304 04:15:44.363074   17177 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18284-15061/.minikube CaCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18284-15061/.minikube}
	I0304 04:15:44.363088   17177 buildroot.go:174] setting up certificates
	I0304 04:15:44.363094   17177 provision.go:83] configureAuth start
	I0304 04:15:44.363098   17177 provision.go:138] copyHostCerts
	I0304 04:15:44.363175   17177 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem, removing ...
	I0304 04:15:44.363180   17177 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem
	I0304 04:15:44.363296   17177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem (1082 bytes)
	I0304 04:15:44.363479   17177 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem, removing ...
	I0304 04:15:44.363483   17177 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem
	I0304 04:15:44.363530   17177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem (1123 bytes)
	I0304 04:15:44.363633   17177 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem, removing ...
	I0304 04:15:44.363636   17177 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem
	I0304 04:15:44.363678   17177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem (1679 bytes)
	I0304 04:15:44.363771   17177 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-156000 san=[127.0.0.1 localhost localhost 127.0.0.1 minikube running-upgrade-156000]
	I0304 04:15:44.607421   17177 provision.go:172] copyRemoteCerts
	I0304 04:15:44.607474   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0304 04:15:44.607494   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:15:44.638310   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0304 04:15:44.645474   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0304 04:15:44.654962   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0304 04:15:44.661922   17177 provision.go:86] duration metric: configureAuth took 298.820917ms
	I0304 04:15:44.661929   17177 buildroot.go:189] setting minikube options for container-runtime
	I0304 04:15:44.662031   17177 config.go:182] Loaded profile config "running-upgrade-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:15:44.662063   17177 main.go:141] libmachine: Using SSH client type: native
	I0304 04:15:44.662149   17177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025f1a30] 0x1025f4290 <nil>  [] 0s} localhost 52560 <nil> <nil>}
	I0304 04:15:44.662154   17177 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0304 04:15:44.718984   17177 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0304 04:15:44.718993   17177 buildroot.go:70] root file system type: tmpfs
	I0304 04:15:44.719045   17177 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0304 04:15:44.719088   17177 main.go:141] libmachine: Using SSH client type: native
	I0304 04:15:44.719186   17177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025f1a30] 0x1025f4290 <nil>  [] 0s} localhost 52560 <nil> <nil>}
	I0304 04:15:44.719220   17177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0304 04:15:44.775388   17177 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0304 04:15:44.775441   17177 main.go:141] libmachine: Using SSH client type: native
	I0304 04:15:44.775545   17177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025f1a30] 0x1025f4290 <nil>  [] 0s} localhost 52560 <nil> <nil>}
	I0304 04:15:44.775552   17177 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0304 04:15:44.831642   17177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0304 04:15:44.831656   17177 machine.go:91] provisioned docker machine in 587.226542ms
	I0304 04:15:44.831661   17177 start.go:300] post-start starting for "running-upgrade-156000" (driver="qemu2")
	I0304 04:15:44.831668   17177 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0304 04:15:44.831712   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0304 04:15:44.831721   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:15:44.860351   17177 ssh_runner.go:195] Run: cat /etc/os-release
	I0304 04:15:44.861660   17177 info.go:137] Remote host: Buildroot 2021.02.12
	I0304 04:15:44.861668   17177 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18284-15061/.minikube/addons for local assets ...
	I0304 04:15:44.861738   17177 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18284-15061/.minikube/files for local assets ...
	I0304 04:15:44.861855   17177 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem -> 154862.pem in /etc/ssl/certs
	I0304 04:15:44.861974   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0304 04:15:44.864476   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem --> /etc/ssl/certs/154862.pem (1708 bytes)
	I0304 04:15:44.871336   17177 start.go:303] post-start completed in 39.669458ms
	I0304 04:15:44.871342   17177 fix.go:56] fixHost completed within 639.105625ms
	I0304 04:15:44.871381   17177 main.go:141] libmachine: Using SSH client type: native
	I0304 04:15:44.871473   17177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1025f1a30] 0x1025f4290 <nil>  [] 0s} localhost 52560 <nil> <nil>}
	I0304 04:15:44.871477   17177 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0304 04:15:44.924155   17177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709554544.919906558
	
	I0304 04:15:44.924163   17177 fix.go:206] guest clock: 1709554544.919906558
	I0304 04:15:44.924166   17177 fix.go:219] Guest: 2024-03-04 04:15:44.919906558 -0800 PST Remote: 2024-03-04 04:15:44.871344 -0800 PST m=+0.755265126 (delta=48.562558ms)
	I0304 04:15:44.924178   17177 fix.go:190] guest clock delta is within tolerance: 48.562558ms
	I0304 04:15:44.924181   17177 start.go:83] releasing machines lock for "running-upgrade-156000", held for 691.953459ms
	I0304 04:15:44.924235   17177 ssh_runner.go:195] Run: cat /version.json
	I0304 04:15:44.924238   17177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0304 04:15:44.924242   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:15:44.924253   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	W0304 04:15:44.924844   17177 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52560: connect: connection refused
	I0304 04:15:44.924875   17177 retry.go:31] will retry after 141.36464ms: dial tcp [::1]:52560: connect: connection refused
	W0304 04:15:44.951160   17177 start.go:420] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0304 04:15:44.951207   17177 ssh_runner.go:195] Run: systemctl --version
	I0304 04:15:44.952836   17177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0304 04:15:44.954621   17177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0304 04:15:44.954644   17177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0304 04:15:44.957698   17177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0304 04:15:44.962205   17177 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0304 04:15:44.962214   17177 start.go:475] detecting cgroup driver to use...
	I0304 04:15:44.962309   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0304 04:15:44.967216   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0304 04:15:44.970142   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0304 04:15:44.973595   17177 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0304 04:15:44.973624   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0304 04:15:44.976712   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0304 04:15:44.979525   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0304 04:15:44.982266   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0304 04:15:44.984996   17177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0304 04:15:44.988094   17177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0304 04:15:44.990856   17177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0304 04:15:44.993718   17177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0304 04:15:44.996691   17177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:15:45.088894   17177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0304 04:15:45.095702   17177 start.go:475] detecting cgroup driver to use...
	I0304 04:15:45.095761   17177 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0304 04:15:45.105089   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0304 04:15:45.184729   17177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0304 04:15:45.206434   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0304 04:15:45.211502   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0304 04:15:45.215992   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0304 04:15:45.221255   17177 ssh_runner.go:195] Run: which cri-dockerd
	I0304 04:15:45.222479   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0304 04:15:45.225393   17177 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0304 04:15:45.230065   17177 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0304 04:15:45.332938   17177 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0304 04:15:45.408564   17177 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0304 04:15:45.408627   17177 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0304 04:15:45.413886   17177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:15:45.504259   17177 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0304 04:15:48.138124   17177 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.633864208s)
	I0304 04:15:48.138204   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0304 04:15:48.142679   17177 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0304 04:15:48.148635   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0304 04:15:48.153551   17177 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0304 04:15:48.234179   17177 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0304 04:15:48.313776   17177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:15:48.391474   17177 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0304 04:15:48.397195   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0304 04:15:48.401480   17177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:15:48.465217   17177 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0304 04:15:48.511518   17177 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0304 04:15:48.511603   17177 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0304 04:15:48.513942   17177 start.go:543] Will wait 60s for crictl version
	I0304 04:15:48.513984   17177 ssh_runner.go:195] Run: which crictl
	I0304 04:15:48.515401   17177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0304 04:15:48.527040   17177 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0304 04:15:48.527101   17177 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0304 04:15:48.539905   17177 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0304 04:15:48.560017   17177 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0304 04:15:48.560149   17177 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0304 04:15:48.561524   17177 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0304 04:15:48.561563   17177 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0304 04:15:48.577582   17177 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0304 04:15:48.577592   17177 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0304 04:15:48.577648   17177 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0304 04:15:48.581313   17177 ssh_runner.go:195] Run: which lz4
	I0304 04:15:48.582525   17177 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0304 04:15:48.583673   17177 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0304 04:15:48.583683   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0304 04:15:49.340012   17177 docker.go:649] Took 0.757509 seconds to copy over tarball
	I0304 04:15:49.340073   17177 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0304 04:15:51.038665   17177 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.69858975s)
	I0304 04:15:51.038680   17177 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0304 04:15:51.054581   17177 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0304 04:15:51.057891   17177 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0304 04:15:51.062985   17177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:15:51.142013   17177 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0304 04:15:52.700318   17177 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.558300542s)
	I0304 04:15:52.700416   17177 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0304 04:15:52.713070   17177 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0304 04:15:52.713081   17177 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0304 04:15:52.713085   17177 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0304 04:15:52.719440   17177 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:15:52.719462   17177 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:15:52.719477   17177 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:15:52.719518   17177 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:15:52.719629   17177 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:15:52.719630   17177 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0304 04:15:52.719773   17177 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:15:52.719820   17177 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:15:52.727241   17177 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:15:52.727366   17177 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:15:52.727383   17177 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:15:52.727694   17177 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:15:52.728116   17177 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:15:52.728211   17177 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0304 04:15:52.728223   17177 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:15:52.728282   17177 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:15:54.818639   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:15:54.836834   17177 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0304 04:15:54.836892   17177 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:15:54.836970   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:15:54.850981   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0304 04:15:54.892014   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:15:54.905375   17177 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0304 04:15:54.905395   17177 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:15:54.905452   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:15:54.918297   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0304 04:15:54.918570   17177 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0304 04:15:54.918690   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:15:54.928879   17177 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0304 04:15:54.928898   17177 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:15:54.928959   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:15:54.938535   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0304 04:15:54.938642   17177 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0304 04:15:54.940283   17177 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0304 04:15:54.940295   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0304 04:15:54.941433   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:15:54.952173   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0304 04:15:54.954685   17177 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0304 04:15:54.954708   17177 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:15:54.954749   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:15:54.956650   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0304 04:15:54.966779   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:15:54.996360   17177 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0304 04:15:54.996387   17177 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0304 04:15:54.996443   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0304 04:15:54.996944   17177 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0304 04:15:54.996954   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0304 04:15:55.002874   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0304 04:15:55.005162   17177 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0304 04:15:55.005186   17177 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:15:55.005206   17177 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0304 04:15:55.005218   17177 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:15:55.005236   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0304 04:15:55.005244   17177 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:15:55.024938   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0304 04:15:55.025065   17177 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0304 04:15:55.052184   17177 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0304 04:15:55.052238   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0304 04:15:55.052266   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0304 04:15:55.052279   17177 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0304 04:15:55.052295   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0304 04:15:55.059600   17177 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0304 04:15:55.059612   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0304 04:15:55.088531   17177 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0304 04:15:55.467429   17177 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0304 04:15:55.468002   17177 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:15:55.506578   17177 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0304 04:15:55.506614   17177 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:15:55.506704   17177 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:15:56.389706   17177 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0304 04:15:56.390139   17177 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0304 04:15:56.399219   17177 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0304 04:15:56.399305   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0304 04:15:56.454293   17177 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0304 04:15:56.454311   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0304 04:15:56.684763   17177 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0304 04:15:56.684805   17177 cache_images.go:92] LoadImages completed in 3.971737334s
	W0304 04:15:56.684846   17177 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0304 04:15:56.684936   17177 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0304 04:15:56.702608   17177 cni.go:84] Creating CNI manager for ""
	I0304 04:15:56.702619   17177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:15:56.702632   17177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0304 04:15:56.702641   17177 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-156000 NodeName:running-upgrade-156000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0304 04:15:56.702724   17177 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-156000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0304 04:15:56.702757   17177 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-156000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-156000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0304 04:15:56.702811   17177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0304 04:15:56.705866   17177 binaries.go:44] Found k8s binaries, skipping transfer
	I0304 04:15:56.705897   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0304 04:15:56.708612   17177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0304 04:15:56.713843   17177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0304 04:15:56.718538   17177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0304 04:15:56.724109   17177 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0304 04:15:56.725607   17177 certs.go:56] Setting up /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000 for IP: 10.0.2.15
	I0304 04:15:56.725617   17177 certs.go:190] acquiring lock for shared ca certs: {Name:mk261f788a3b9cd088f9e587f9da53d875f26951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:15:56.725832   17177 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.key
	I0304 04:15:56.725876   17177 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.key
	I0304 04:15:56.725923   17177 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.key
	I0304 04:15:56.725977   17177 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/apiserver.key.49504c3e
	I0304 04:15:56.726026   17177 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/proxy-client.key
	I0304 04:15:56.726162   17177 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486.pem (1338 bytes)
	W0304 04:15:56.726189   17177 certs.go:433] ignoring /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486_empty.pem, impossibly tiny 0 bytes
	I0304 04:15:56.726195   17177 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem (1675 bytes)
	I0304 04:15:56.726224   17177 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem (1082 bytes)
	I0304 04:15:56.726242   17177 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem (1123 bytes)
	I0304 04:15:56.726258   17177 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem (1679 bytes)
	I0304 04:15:56.726298   17177 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem (1708 bytes)
	I0304 04:15:56.726652   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0304 04:15:56.733950   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0304 04:15:56.740645   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0304 04:15:56.747971   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0304 04:15:56.755425   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0304 04:15:56.762690   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0304 04:15:56.769574   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0304 04:15:56.776290   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0304 04:15:56.783842   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486.pem --> /usr/share/ca-certificates/15486.pem (1338 bytes)
	I0304 04:15:56.790884   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem --> /usr/share/ca-certificates/154862.pem (1708 bytes)
	I0304 04:15:56.797693   17177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0304 04:15:56.804254   17177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0304 04:15:56.809432   17177 ssh_runner.go:195] Run: openssl version
	I0304 04:15:56.811206   17177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15486.pem && ln -fs /usr/share/ca-certificates/15486.pem /etc/ssl/certs/15486.pem"
	I0304 04:15:56.814209   17177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15486.pem
	I0304 04:15:56.815658   17177 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Mar  4 12:05 /usr/share/ca-certificates/15486.pem
	I0304 04:15:56.815683   17177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15486.pem
	I0304 04:15:56.817669   17177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15486.pem /etc/ssl/certs/51391683.0"
	I0304 04:15:56.820442   17177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154862.pem && ln -fs /usr/share/ca-certificates/154862.pem /etc/ssl/certs/154862.pem"
	I0304 04:15:56.823893   17177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154862.pem
	I0304 04:15:56.825434   17177 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Mar  4 12:05 /usr/share/ca-certificates/154862.pem
	I0304 04:15:56.825455   17177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154862.pem
	I0304 04:15:56.827283   17177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154862.pem /etc/ssl/certs/3ec20f2e.0"
	I0304 04:15:56.830814   17177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0304 04:15:56.833793   17177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:15:56.835227   17177 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Mar  4 12:15 /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:15:56.835251   17177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:15:56.837055   17177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0304 04:15:56.840100   17177 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0304 04:15:56.841451   17177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0304 04:15:56.843152   17177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0304 04:15:56.844915   17177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0304 04:15:56.846610   17177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0304 04:15:56.848450   17177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0304 04:15:56.850355   17177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0304 04:15:56.852030   17177 kubeadm.go:404] StartCluster: {Name:running-upgrade-156000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 Clus
terName:running-upgrade-156000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:15:56.852096   17177 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0304 04:15:56.862365   17177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0304 04:15:56.865626   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:15:56.866404   17177 main.go:141] libmachine: Using SSH client type: external
	I0304 04:15:56.866425   17177 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa (-rw-------)
	I0304 04:15:56.866441   17177 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa -p 52560] /usr/bin/ssh <nil>}
	I0304 04:15:56.866453   17177 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa -p 52560 -f -NTL 52592:localhost:8443
	I0304 04:15:56.904236   17177 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0304 04:15:56.904370   17177 kubeadm.go:636] restartCluster start
	I0304 04:15:56.904431   17177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0304 04:15:56.908473   17177 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0304 04:15:56.908521   17177 kubeconfig.go:135] verify returned: extract IP: "running-upgrade-156000" does not appear in /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:15:56.908538   17177 kubeconfig.go:146] "running-upgrade-156000" context is missing from /Users/jenkins/minikube-integration/18284-15061/kubeconfig - will repair!
	I0304 04:15:56.908747   17177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:15:56.909389   17177 kapi.go:59] client config for running-upgrade-156000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[
]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038e77d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:15:56.910247   17177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0304 04:15:56.913007   17177 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-156000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0304 04:15:56.913012   17177 kubeadm.go:1135] stopping kube-system containers ...
	I0304 04:15:56.913048   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0304 04:15:56.924168   17177 docker.go:483] Stopping containers: [ae6a4dff9a94 718fed285911 e4edeb2f2dd6 4a0c8fe8aa8d e9016c04a2c2 b74f2799ab14 9023f5db92ba 2ac696c7e05f 0ff2e201c60b 42712b1ea980 440178f351a5 6769c1478bde 308679dce2d6 c6d26b54f1d0]
	I0304 04:15:56.924241   17177 ssh_runner.go:195] Run: docker stop ae6a4dff9a94 718fed285911 e4edeb2f2dd6 4a0c8fe8aa8d e9016c04a2c2 b74f2799ab14 9023f5db92ba 2ac696c7e05f 0ff2e201c60b 42712b1ea980 440178f351a5 6769c1478bde 308679dce2d6 c6d26b54f1d0
	I0304 04:15:56.935516   17177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0304 04:15:57.030782   17177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:15:57.035131   17177 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar  4 12:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar  4 12:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar  4 12:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar  4 12:15 /etc/kubernetes/scheduler.conf
	
	I0304 04:15:57.035171   17177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0304 04:15:57.039065   17177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0304 04:15:57.042638   17177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0304 04:15:57.045880   17177 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0304 04:15:57.045908   17177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0304 04:15:57.048912   17177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0304 04:15:57.051846   17177 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0304 04:15:57.051869   17177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0304 04:15:57.054906   17177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:15:57.057724   17177 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0304 04:15:57.057730   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:15:57.104154   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:15:57.434017   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:15:57.640632   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:15:57.667500   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:15:57.695074   17177 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:15:57.695151   17177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:15:58.197195   17177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:15:58.697215   17177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:15:59.197222   17177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:15:59.201707   17177 api_server.go:72] duration metric: took 1.506644334s to wait for apiserver process to appear ...
	I0304 04:15:59.201717   17177 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:15:59.201742   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:04.203885   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:04.203924   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:09.204287   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:09.204344   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:14.204871   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:14.204911   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:19.205563   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:19.205590   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:24.206791   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:24.206816   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:29.207883   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:29.207962   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:34.209075   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:34.209122   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:39.211006   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:39.211083   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:44.212695   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:44.212737   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:49.215083   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:49.215168   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:54.217888   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:54.217966   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:16:59.220718   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:16:59.221232   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:16:59.260909   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:16:59.261074   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:16:59.286412   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:16:59.286535   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:16:59.300546   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:16:59.300624   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:16:59.312851   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:16:59.312926   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:16:59.323200   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:16:59.323261   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:16:59.333681   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:16:59.333740   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:16:59.343772   17177 logs.go:276] 0 containers: []
	W0304 04:16:59.343785   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:16:59.343846   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:16:59.355085   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:16:59.355101   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:16:59.355107   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:16:59.373080   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:16:59.373093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:16:59.386476   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:16:59.386489   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:16:59.412105   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:16:59.412114   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:16:59.423296   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:16:59.423311   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:16:59.434915   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:16:59.434926   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:16:59.474379   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:16:59.474387   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:16:59.543844   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:16:59.543859   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:16:59.561663   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:16:59.561677   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:16:59.575507   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:16:59.575521   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:16:59.596180   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:16:59.596189   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:16:59.611234   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:16:59.611248   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:16:59.625581   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:16:59.625592   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:16:59.629917   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:16:59.629925   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:16:59.652208   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:16:59.652222   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:16:59.674047   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:16:59.674058   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:02.188049   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:07.190742   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:07.191151   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:07.230502   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:07.230634   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:07.254369   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:07.254489   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:07.271341   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:07.271405   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:07.283714   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:07.283774   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:07.294011   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:07.294080   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:07.304540   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:07.304604   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:07.314275   17177 logs.go:276] 0 containers: []
	W0304 04:17:07.314292   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:07.314360   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:07.324552   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:07.324569   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:07.324574   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:07.336060   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:07.336069   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:07.371801   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:07.371812   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:07.385726   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:07.385739   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:07.396433   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:07.396443   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:07.410464   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:07.410473   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:07.427945   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:07.427956   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:07.446600   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:07.446615   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:07.458087   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:07.458100   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:07.483307   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:07.483315   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:07.523453   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:07.523460   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:07.542384   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:07.542395   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:07.555849   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:07.555860   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:07.560351   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:07.560359   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:07.574437   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:07.574447   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:07.591762   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:07.591775   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:10.105052   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:15.107774   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:15.108209   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:15.145140   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:15.145268   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:15.166111   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:15.166189   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:15.182491   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:15.182554   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:15.196119   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:15.196191   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:15.207283   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:15.207352   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:15.218148   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:15.218220   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:15.228186   17177 logs.go:276] 0 containers: []
	W0304 04:17:15.228196   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:15.228246   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:15.239069   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:15.239093   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:15.239100   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:15.277197   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:15.277211   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:15.291580   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:15.291591   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:15.305587   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:15.305596   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:15.309614   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:15.309619   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:15.320537   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:15.320546   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:15.332411   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:15.332424   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:15.343894   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:15.343906   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:15.382207   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:15.382215   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:15.400011   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:15.400019   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:15.418563   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:15.418591   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:15.429650   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:15.429661   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:15.441973   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:15.441984   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:15.459911   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:15.459924   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:15.477250   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:15.477260   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:15.488756   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:15.488767   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:18.016293   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:23.018691   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:23.018996   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:23.048550   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:23.048695   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:23.067767   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:23.067852   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:23.083613   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:23.083689   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:23.095515   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:23.095590   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:23.106489   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:23.106556   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:23.116989   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:23.117056   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:23.126796   17177 logs.go:276] 0 containers: []
	W0304 04:17:23.126805   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:23.126874   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:23.137405   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:23.137420   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:23.137425   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:23.148400   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:23.148410   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:23.162194   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:23.162208   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:23.175014   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:23.175027   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:23.188581   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:23.188591   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:23.226429   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:23.226443   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:23.238440   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:23.238451   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:23.261762   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:23.261773   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:23.287425   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:23.287436   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:23.325773   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:23.325782   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:23.329634   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:23.329643   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:23.343855   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:23.343866   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:23.362906   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:23.362920   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:23.378575   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:23.378590   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:23.389621   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:23.389631   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:23.402000   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:23.402013   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:25.927561   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:30.929632   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:30.929967   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:30.962842   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:30.962958   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:30.986944   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:30.987026   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:31.000401   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:31.000473   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:31.011720   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:31.011782   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:31.022233   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:31.022290   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:31.032252   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:31.032323   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:31.043073   17177 logs.go:276] 0 containers: []
	W0304 04:17:31.043084   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:31.043140   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:31.053244   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:31.053260   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:31.053266   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:31.092906   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:31.092918   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:31.113704   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:31.113713   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:31.125277   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:31.125288   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:31.142833   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:31.142845   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:31.184033   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:31.184043   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:31.198477   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:31.198488   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:31.217278   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:31.217290   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:31.228974   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:31.228985   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:31.240635   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:31.240647   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:31.259560   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:31.259571   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:31.271642   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:31.271656   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:31.296233   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:31.296245   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:31.300439   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:31.300447   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:31.315933   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:31.315942   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:31.327879   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:31.327889   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:33.842939   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:38.845486   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:38.845901   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:38.886782   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:38.886904   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:38.909482   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:38.909606   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:38.933019   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:38.933098   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:38.944527   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:38.944611   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:38.955018   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:38.955078   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:38.965621   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:38.965681   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:38.982621   17177 logs.go:276] 0 containers: []
	W0304 04:17:38.982632   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:38.982707   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:38.993060   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:38.993084   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:38.993090   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:39.028895   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:39.028907   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:39.043810   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:39.043824   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:39.062512   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:39.062524   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:39.076216   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:39.076235   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:39.095399   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:39.095412   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:39.113460   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:39.113471   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:39.124701   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:39.124713   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:39.162491   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:39.162499   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:39.166988   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:39.166994   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:39.180470   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:39.180481   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:39.196134   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:39.196147   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:39.211851   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:39.211865   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:39.225655   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:39.225666   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:39.237650   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:39.237660   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:39.262093   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:39.262106   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:41.776713   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:46.778981   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:46.779170   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:46.790656   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:46.790727   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:46.808272   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:46.808344   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:46.819153   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:46.819222   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:46.829789   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:46.829859   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:46.840100   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:46.840170   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:46.851395   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:46.851467   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:46.861605   17177 logs.go:276] 0 containers: []
	W0304 04:17:46.861618   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:46.861675   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:46.872682   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:46.872699   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:46.872704   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:46.877020   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:46.877029   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:46.889603   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:46.889613   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:46.901317   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:46.901328   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:46.915421   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:46.915433   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:46.934425   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:46.934435   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:46.945947   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:46.945958   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:46.958090   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:46.958103   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:46.994477   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:46.994492   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:47.031368   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:47.031382   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:47.046817   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:47.046828   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:47.059075   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:47.059087   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:47.076393   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:47.076405   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:47.101105   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:47.101113   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:47.113216   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:47.113227   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:47.153720   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:47.153728   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:49.675008   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:17:54.675871   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:17:54.676291   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:17:54.716309   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:17:54.716449   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:17:54.739154   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:17:54.739290   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:17:54.757303   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:17:54.757387   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:17:54.770857   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:17:54.770932   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:17:54.781059   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:17:54.781128   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:17:54.791451   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:17:54.791516   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:17:54.801431   17177 logs.go:276] 0 containers: []
	W0304 04:17:54.801441   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:17:54.801500   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:17:54.812057   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:17:54.812075   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:17:54.812083   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:17:54.816486   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:17:54.816494   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:17:54.827896   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:17:54.827906   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:17:54.839409   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:17:54.839419   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:17:54.851126   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:17:54.851135   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:17:54.870355   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:17:54.870365   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:17:54.908655   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:17:54.908665   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:17:54.923116   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:17:54.923130   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:17:54.940862   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:17:54.940871   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:17:54.954531   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:17:54.954542   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:17:54.992366   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:17:54.992381   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:17:55.006748   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:17:55.006760   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:17:55.018489   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:17:55.018499   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:17:55.042405   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:17:55.042413   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:17:55.065343   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:17:55.065357   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:17:55.084508   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:17:55.084520   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:17:57.602352   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:02.604605   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:02.604751   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:02.623513   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:02.623608   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:02.635399   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:02.635469   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:02.650168   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:02.650252   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:02.662307   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:02.662393   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:02.673951   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:02.674028   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:02.685073   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:02.685153   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:02.696199   17177 logs.go:276] 0 containers: []
	W0304 04:18:02.696213   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:02.696288   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:02.706939   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:02.706961   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:02.706966   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:02.725553   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:02.725569   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:02.742764   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:02.742775   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:02.766208   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:02.766219   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:02.778348   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:02.778362   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:02.790215   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:02.790227   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:02.815728   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:02.815736   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:02.855600   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:02.855614   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:02.869497   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:02.869508   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:02.883774   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:02.883786   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:02.895824   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:02.895838   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:02.908062   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:02.908074   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:02.946788   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:02.946798   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:02.951473   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:02.951480   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:02.981594   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:02.981605   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:02.993302   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:02.993316   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:05.509323   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:10.511710   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:10.511956   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:10.541395   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:10.541510   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:10.558797   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:10.558891   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:10.572037   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:10.572114   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:10.584224   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:10.584303   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:10.594761   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:10.594830   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:10.606128   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:10.606198   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:10.628805   17177 logs.go:276] 0 containers: []
	W0304 04:18:10.628817   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:10.628883   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:10.639465   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:10.639482   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:10.639491   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:10.643966   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:10.643975   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:10.659406   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:10.659419   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:10.670516   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:10.670528   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:10.684228   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:10.684240   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:10.719997   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:10.720011   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:10.731833   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:10.731845   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:10.749631   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:10.749640   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:10.775214   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:10.775225   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:10.815010   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:10.815019   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:10.834107   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:10.834119   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:10.847868   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:10.847882   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:10.859504   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:10.859516   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:10.876405   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:10.876415   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:10.887998   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:10.888009   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:10.899850   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:10.899861   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:13.414695   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:18.416886   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:18.417043   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:18.428582   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:18.428653   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:18.439219   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:18.439284   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:18.449941   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:18.450012   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:18.460270   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:18.460345   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:18.471180   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:18.471249   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:18.482008   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:18.482076   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:18.492234   17177 logs.go:276] 0 containers: []
	W0304 04:18:18.492248   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:18.492305   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:18.502819   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:18.502836   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:18.502842   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:18.525064   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:18.525076   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:18.538705   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:18.538719   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:18.550921   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:18.550933   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:18.562457   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:18.562484   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:18.581800   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:18.581817   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:18.599754   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:18.599766   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:18.611374   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:18.611386   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:18.634382   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:18.634393   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:18.673705   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:18.673715   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:18.711101   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:18.711115   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:18.725115   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:18.725126   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:18.749980   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:18.749991   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:18.763751   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:18.763766   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:18.775600   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:18.775615   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:18.780379   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:18.780387   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:21.297083   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:26.299373   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:26.299598   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:26.321930   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:26.322047   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:26.336413   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:26.336495   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:26.352924   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:26.352995   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:26.363427   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:26.363502   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:26.374556   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:26.374625   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:26.385046   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:26.385109   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:26.395242   17177 logs.go:276] 0 containers: []
	W0304 04:18:26.395256   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:26.395320   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:26.405667   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:26.405689   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:26.405694   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:26.430671   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:26.430679   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:26.471305   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:26.471316   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:26.485859   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:26.485870   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:26.499899   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:26.499910   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:26.511604   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:26.511617   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:26.523485   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:26.523498   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:26.542103   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:26.542114   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:26.565012   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:26.565029   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:26.586415   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:26.586429   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:26.598903   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:26.598915   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:26.603878   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:26.603885   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:26.640509   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:26.640520   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:26.652757   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:26.652767   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:26.664054   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:26.664066   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:26.682145   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:26.682155   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:29.200943   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:34.202971   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:34.203316   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:34.238201   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:34.238307   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:34.259506   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:34.259589   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:34.275179   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:34.275243   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:34.287630   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:34.287685   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:34.298794   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:34.298864   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:34.309636   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:34.309696   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:34.319610   17177 logs.go:276] 0 containers: []
	W0304 04:18:34.319620   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:34.319663   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:34.329889   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:34.329904   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:34.329909   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:34.350616   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:34.350624   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:34.365133   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:34.365142   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:34.383571   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:34.383582   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:34.395528   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:34.395540   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:34.409843   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:34.409866   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:34.423061   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:34.423074   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:34.438792   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:34.438804   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:34.450932   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:34.450942   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:34.463163   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:34.463175   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:34.481202   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:34.481212   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:34.493263   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:34.493273   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:34.521737   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:34.521756   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:34.563744   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:34.563760   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:34.568443   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:34.568452   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:34.607133   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:34.607146   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:37.127073   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:42.129502   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:42.129631   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:42.141890   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:42.141967   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:42.153412   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:42.153491   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:42.164210   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:42.164275   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:42.174833   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:42.174907   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:42.185826   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:42.185920   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:42.196780   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:42.196845   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:42.206556   17177 logs.go:276] 0 containers: []
	W0304 04:18:42.206566   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:42.206615   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:42.218877   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:42.218894   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:42.218899   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:42.236441   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:42.236457   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:42.250815   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:42.250829   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:42.292175   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:42.292183   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:42.304134   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:42.304143   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:42.329301   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:42.329308   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:42.341432   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:42.341442   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:42.353769   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:42.353781   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:42.391918   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:42.391929   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:42.411148   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:42.411157   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:42.423935   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:42.423951   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:42.428601   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:42.428608   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:42.451081   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:42.451092   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:42.463882   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:42.463893   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:42.476191   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:42.476202   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:42.494000   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:42.494011   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:45.010244   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:50.012519   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:50.012969   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:50.049910   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:50.050052   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:50.077131   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:50.077222   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:50.090914   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:50.090988   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:50.102238   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:50.102309   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:50.113103   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:50.113170   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:50.123648   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:50.123710   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:50.133884   17177 logs.go:276] 0 containers: []
	W0304 04:18:50.133895   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:50.133953   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:50.144484   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:50.144500   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:50.144506   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:50.162414   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:50.162427   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:50.174685   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:50.174696   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:50.186438   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:50.186452   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:50.222195   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:50.222209   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:50.236778   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:50.236789   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:50.254815   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:50.254829   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:50.279635   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:50.279642   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:50.293436   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:50.293447   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:50.304983   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:50.304995   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:50.318994   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:50.319008   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:50.337993   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:50.338002   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:50.349775   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:50.349785   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:50.362456   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:50.362469   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:50.400403   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:50.400409   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:50.404450   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:50.404460   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:52.919577   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:57.922280   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:57.922402   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:57.933372   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:57.933444   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:57.944758   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:57.944830   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:57.955180   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:57.955257   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:57.966538   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:57.966610   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:57.978094   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:57.978166   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:57.993718   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:57.993785   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:58.004748   17177 logs.go:276] 0 containers: []
	W0304 04:18:58.004761   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:58.004817   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:58.016316   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:58.016332   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:58.016337   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:58.033836   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:58.033848   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:58.049143   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:58.049152   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:58.076073   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:58.076096   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:58.116201   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:58.116212   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:58.132094   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:58.132108   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:58.146255   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:58.146274   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:58.186005   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:58.186018   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:58.202639   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:58.202652   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:58.217804   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:58.217815   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:58.236703   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:58.236717   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:58.249740   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:58.249755   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:58.264324   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:58.264336   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:58.268941   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:58.268953   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:58.290004   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:58.290017   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:58.303091   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:58.303103   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:00.825115   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:05.827666   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:05.827773   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:05.839799   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:05.839876   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:05.850442   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:05.850512   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:05.861480   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:05.861549   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:05.871807   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:05.871873   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:05.882161   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:05.882226   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:05.897275   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:05.897348   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:05.907721   17177 logs.go:276] 0 containers: []
	W0304 04:19:05.907732   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:05.907786   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:05.918251   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:05.918284   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:05.918292   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:05.956640   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:05.956650   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:05.973767   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:05.973776   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:05.984774   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:05.984784   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:06.009270   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:06.009284   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:06.050911   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:06.050922   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:06.066750   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:06.066759   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:06.081713   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:06.081724   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:06.093080   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:06.093093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:06.104861   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:06.104874   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:06.116969   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:06.116983   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:06.130636   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:06.130647   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:06.151144   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:06.151158   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:06.169825   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:06.169843   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:06.187065   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:06.187075   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:06.191632   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:06.191641   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:08.712132   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:13.714299   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:13.714445   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:13.728103   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:13.728176   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:13.740314   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:13.740386   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:13.751047   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:13.751105   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:13.761721   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:13.761792   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:13.774193   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:13.774266   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:13.784687   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:13.784754   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:13.795416   17177 logs.go:276] 0 containers: []
	W0304 04:19:13.795427   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:13.795484   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:13.806010   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:13.806027   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:13.806033   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:13.818325   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:13.818339   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:13.857162   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:13.857172   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:13.869198   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:13.869211   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:13.883136   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:13.883147   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:13.895572   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:13.895581   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:13.914013   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:13.914023   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:13.931247   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:13.931256   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:13.942167   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:13.942179   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:13.978078   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:13.978087   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:13.992165   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:13.992174   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:14.006064   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:14.006073   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:14.019888   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:14.019899   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:14.024172   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:14.024180   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:14.041755   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:14.041766   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:14.057034   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:14.057045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:16.582306   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:21.585012   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:21.585478   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:21.633235   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:21.633353   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:21.652134   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:21.652235   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:21.666024   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:21.666097   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:21.678126   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:21.678209   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:21.692824   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:21.692895   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:21.723211   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:21.723294   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:21.753298   17177 logs.go:276] 0 containers: []
	W0304 04:19:21.753312   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:21.753375   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:21.763745   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:21.763761   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:21.763767   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:21.778110   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:21.778123   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:21.795417   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:21.795429   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:21.813587   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:21.813597   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:21.825004   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:21.825016   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:21.849437   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:21.849449   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:21.853902   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:21.853911   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:21.865300   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:21.865311   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:21.879385   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:21.879396   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:21.917043   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:21.917056   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:21.939391   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:21.939401   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:21.956663   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:21.956673   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:21.967453   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:21.967465   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:21.979988   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:21.979999   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:21.991983   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:21.991994   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:22.004205   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:22.004219   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:24.544797   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:29.547190   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:29.547670   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:29.587646   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:29.587778   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:29.610008   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:29.610129   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:29.624818   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:29.624897   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:29.638452   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:29.638520   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:29.649462   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:29.649524   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:29.660566   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:29.660632   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:29.670460   17177 logs.go:276] 0 containers: []
	W0304 04:19:29.670471   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:29.670528   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:29.681027   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:29.681042   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:29.681048   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:29.715755   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:29.715769   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:29.737586   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:29.737598   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:29.751786   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:29.751796   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:29.769201   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:29.769212   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:29.775637   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:29.775649   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:29.794049   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:29.794093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:29.806341   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:29.806352   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:29.828985   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:29.828992   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:29.868679   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:29.868693   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:29.882914   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:29.882927   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:29.894732   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:29.894744   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:29.911122   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:29.911132   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:29.923218   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:29.923230   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:29.937683   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:29.937698   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:29.948782   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:29.948792   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:32.462486   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:37.465153   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:37.465569   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:37.500350   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:37.500486   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:37.520807   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:37.520950   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:37.535276   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:37.535350   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:37.547635   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:37.547708   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:37.557879   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:37.557946   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:37.572652   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:37.572716   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:37.583407   17177 logs.go:276] 0 containers: []
	W0304 04:19:37.583424   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:37.583486   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:37.593838   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:37.593856   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:37.593862   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:37.629461   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:37.629472   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:37.643630   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:37.643639   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:37.656080   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:37.656092   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:37.678919   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:37.678928   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:37.690915   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:37.690929   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:37.695309   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:37.695317   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:37.714363   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:37.714374   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:37.735618   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:37.735630   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:37.747597   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:37.747610   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:37.761645   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:37.761657   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:37.801226   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:37.801236   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:37.812996   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:37.813010   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:37.830209   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:37.830222   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:37.846720   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:37.846732   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:37.858282   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:37.858293   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:40.374355   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:45.376738   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:45.377191   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:45.415656   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:45.415798   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:45.436555   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:45.436652   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:45.451614   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:45.451689   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:45.464599   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:45.464675   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:45.479605   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:45.479705   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:45.490564   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:45.490635   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:45.501386   17177 logs.go:276] 0 containers: []
	W0304 04:19:45.501397   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:45.501456   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:45.512019   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:45.512039   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:45.512045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:45.516617   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:45.516627   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:45.535204   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:45.535216   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:45.552850   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:45.552861   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:45.570949   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:45.570965   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:45.588238   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:45.588248   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:45.600107   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:45.600121   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:45.637923   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:45.637931   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:45.649594   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:45.649606   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:45.661359   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:45.661371   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:45.675474   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:45.675484   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:45.689594   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:45.689604   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:45.703366   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:45.703375   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:45.715296   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:45.715309   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:45.739309   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:45.739326   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:45.775428   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:45.775440   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:48.289619   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:53.291907   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:53.292038   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:53.306326   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:53.306409   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:53.318866   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:53.318939   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:53.330029   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:53.330106   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:53.340637   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:53.340708   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:53.351246   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:53.351312   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:53.361638   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:53.361706   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:53.377471   17177 logs.go:276] 0 containers: []
	W0304 04:19:53.377484   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:53.377545   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:53.387639   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:53.387659   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:53.387664   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:53.402340   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:53.402350   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:53.426266   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:53.426280   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:53.461563   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:53.461576   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:53.473456   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:53.473468   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:53.485636   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:53.485650   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:53.502504   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:53.502517   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:53.517918   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:53.517929   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:53.557742   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:53.557755   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:53.577009   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:53.577019   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:53.588514   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:53.588527   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:53.600042   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:53.600055   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:53.616159   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:53.616173   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:53.620643   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:53.620660   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:53.635978   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:53.635989   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:53.653957   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:53.653970   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:56.169160   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:01.171520   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:01.171681   17177 kubeadm.go:640] restartCluster took 4m4.26874775s
	W0304 04:20:01.171828   17177 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0304 04:20:01.171879   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0304 04:20:02.198529   17177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026644083s)
	I0304 04:20:02.198594   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:20:02.203547   17177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:20:02.206565   17177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:20:02.209274   17177 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0304 04:20:02.209289   17177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0304 04:20:02.227194   17177 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0304 04:20:02.227225   17177 kubeadm.go:322] [preflight] Running pre-flight checks
	I0304 04:20:02.285936   17177 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0304 04:20:02.285994   17177 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0304 04:20:02.286048   17177 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0304 04:20:02.335505   17177 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0304 04:20:02.343595   17177 out.go:204]   - Generating certificates and keys ...
	I0304 04:20:02.343630   17177 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0304 04:20:02.343672   17177 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0304 04:20:02.343710   17177 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0304 04:20:02.343740   17177 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0304 04:20:02.343791   17177 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0304 04:20:02.343817   17177 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0304 04:20:02.343853   17177 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0304 04:20:02.343886   17177 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0304 04:20:02.343925   17177 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0304 04:20:02.343964   17177 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0304 04:20:02.343984   17177 kubeadm.go:322] [certs] Using the existing "sa" key
	I0304 04:20:02.344013   17177 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0304 04:20:02.477940   17177 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0304 04:20:02.628673   17177 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0304 04:20:02.770217   17177 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0304 04:20:03.008133   17177 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0304 04:20:03.038211   17177 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0304 04:20:03.038265   17177 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0304 04:20:03.038287   17177 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0304 04:20:03.122823   17177 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0304 04:20:03.128862   17177 out.go:204]   - Booting up control plane ...
	I0304 04:20:03.128910   17177 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0304 04:20:03.128944   17177 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0304 04:20:03.128986   17177 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0304 04:20:03.129021   17177 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0304 04:20:03.129093   17177 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0304 04:20:07.630303   17177 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502751 seconds
	I0304 04:20:07.630439   17177 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0304 04:20:07.635499   17177 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0304 04:20:08.144593   17177 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0304 04:20:08.144741   17177 kubeadm.go:322] [mark-control-plane] Marking the node running-upgrade-156000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0304 04:20:08.650069   17177 kubeadm.go:322] [bootstrap-token] Using token: 7z17se.iszmbsipe7dpw0nb
	I0304 04:20:08.658964   17177 out.go:204]   - Configuring RBAC rules ...
	I0304 04:20:08.659055   17177 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0304 04:20:08.659113   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0304 04:20:08.663959   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0304 04:20:08.664858   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0304 04:20:08.665864   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0304 04:20:08.667154   17177 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0304 04:20:08.670847   17177 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0304 04:20:08.847107   17177 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0304 04:20:09.057318   17177 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0304 04:20:09.057827   17177 kubeadm.go:322] 
	I0304 04:20:09.057866   17177 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0304 04:20:09.057871   17177 kubeadm.go:322] 
	I0304 04:20:09.057915   17177 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0304 04:20:09.057920   17177 kubeadm.go:322] 
	I0304 04:20:09.057932   17177 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0304 04:20:09.057961   17177 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0304 04:20:09.057986   17177 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0304 04:20:09.057989   17177 kubeadm.go:322] 
	I0304 04:20:09.058017   17177 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0304 04:20:09.058021   17177 kubeadm.go:322] 
	I0304 04:20:09.058049   17177 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0304 04:20:09.058052   17177 kubeadm.go:322] 
	I0304 04:20:09.058079   17177 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0304 04:20:09.058118   17177 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0304 04:20:09.058163   17177 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0304 04:20:09.058168   17177 kubeadm.go:322] 
	I0304 04:20:09.058209   17177 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0304 04:20:09.058245   17177 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0304 04:20:09.058250   17177 kubeadm.go:322] 
	I0304 04:20:09.058293   17177 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7z17se.iszmbsipe7dpw0nb \
	I0304 04:20:09.058342   17177 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef \
	I0304 04:20:09.058352   17177 kubeadm.go:322] 	--control-plane 
	I0304 04:20:09.058357   17177 kubeadm.go:322] 
	I0304 04:20:09.058407   17177 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0304 04:20:09.058411   17177 kubeadm.go:322] 
	I0304 04:20:09.058451   17177 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7z17se.iszmbsipe7dpw0nb \
	I0304 04:20:09.058503   17177 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef 
	I0304 04:20:09.058647   17177 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0304 04:20:09.058656   17177 cni.go:84] Creating CNI manager for ""
	I0304 04:20:09.058663   17177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:20:09.066574   17177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0304 04:20:09.070600   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0304 04:20:09.074523   17177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0304 04:20:09.079991   17177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0304 04:20:09.080042   17177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:20:09.080099   17177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab57ba9f65fd4cb3ac8815e4f9baeeca5604e645 minikube.k8s.io/name=running-upgrade-156000 minikube.k8s.io/updated_at=2024_03_04T04_20_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:20:09.116659   17177 kubeadm.go:1088] duration metric: took 36.659416ms to wait for elevateKubeSystemPrivileges.
	I0304 04:20:09.116665   17177 ops.go:34] apiserver oom_adj: -16
	I0304 04:20:09.130813   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:20:09.131783   17177 main.go:141] libmachine: Using SSH client type: external
	I0304 04:20:09.131798   17177 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa (-rw-------)
	I0304 04:20:09.131813   17177 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa -p 52560] /usr/bin/ssh <nil>}
	I0304 04:20:09.131826   17177 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa -p 52560 -f -NTL 52592:localhost:8443
	I0304 04:20:09.168060   17177 kubeadm.go:406] StartCluster complete in 4m12.317517542s
	I0304 04:20:09.168107   17177 settings.go:142] acquiring lock: {Name:mk5ed2e5b4fa3bf37e56838441d7d3c0b1b72b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:20:09.168272   17177 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:20:09.168831   17177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:20:09.169098   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0304 04:20:09.169172   17177 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0304 04:20:09.169229   17177 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-156000"
	I0304 04:20:09.169246   17177 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-156000"
	W0304 04:20:09.169248   17177 addons.go:243] addon storage-provisioner should already be in state true
	I0304 04:20:09.169252   17177 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-156000"
	I0304 04:20:09.169267   17177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-156000"
	I0304 04:20:09.169281   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:20:09.169339   17177 config.go:182] Loaded profile config "running-upgrade-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:20:09.169438   17177 kapi.go:59] client config for running-upgrade-156000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038e77d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:20:09.170362   17177 kapi.go:59] client config for running-upgrade-156000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[
]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038e77d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:20:09.170460   17177 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-156000"
	W0304 04:20:09.170465   17177 addons.go:243] addon default-storageclass should already be in state true
	I0304 04:20:09.170473   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:20:09.174451   17177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:20:09.178562   17177 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:20:09.178570   17177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0304 04:20:09.178591   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:20:09.179454   17177 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0304 04:20:09.179461   17177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0304 04:20:09.179466   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:20:09.199928   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0304 04:20:09.216921   17177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0304 04:20:09.250548   17177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:20:09.596804   17177 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
	W0304 04:20:39.171810   17177 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "running-upgrade-156000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0304 04:20:39.171823   17177 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0304 04:20:39.171834   17177 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:20:39.176151   17177 out.go:177] * Verifying Kubernetes components...
	I0304 04:20:39.182171   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:20:39.187313   17177 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:20:39.187358   17177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:20:39.191826   17177 api_server.go:72] duration metric: took 19.9805ms to wait for apiserver process to appear ...
	I0304 04:20:39.191834   17177 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:20:39.191841   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:20:39.623578   17177 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0304 04:20:39.627947   17177 out.go:177] * Enabled addons: storage-provisioner
	I0304 04:20:39.641850   17177 addons.go:505] enable addons completed in 30.472883084s: enabled=[storage-provisioner]
	I0304 04:20:44.193918   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:44.193979   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:49.194402   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:49.194436   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:54.194847   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:54.194891   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:59.195865   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:59.195905   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:04.196816   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:04.196857   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:09.198234   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:09.198276   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:14.199642   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:14.199685   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:19.201407   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:19.201428   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:24.203549   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:24.203572   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:29.205747   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:29.205789   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:34.208019   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:34.208056   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:39.210291   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:39.210458   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:39.230659   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:21:39.230761   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:39.245443   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:21:39.245525   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:39.257376   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:21:39.257451   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:39.268026   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:21:39.268109   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:39.278094   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:21:39.278158   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:39.288615   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:21:39.288680   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:39.298559   17177 logs.go:276] 0 containers: []
	W0304 04:21:39.298570   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:39.298641   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:39.308701   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:21:39.308718   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:21:39.308723   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:21:39.320375   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:21:39.320388   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:21:39.335532   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:21:39.335541   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:21:39.346751   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:39.346763   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:39.386624   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:39.386635   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:39.390883   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:21:39.390891   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:21:39.404936   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:21:39.404947   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:21:39.421516   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:21:39.421526   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:21:39.433522   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:21:39.433531   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:21:39.444971   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:39.444984   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:39.469341   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:39.469351   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:39.509114   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:21:39.509126   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:21:39.526459   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:21:39.526469   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:42.040123   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:47.042330   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:47.042472   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:47.060024   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:21:47.060163   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:47.074427   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:21:47.074501   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:47.085964   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:21:47.086030   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:47.095983   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:21:47.096051   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:47.109303   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:21:47.109366   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:47.119785   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:21:47.119856   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:47.130304   17177 logs.go:276] 0 containers: []
	W0304 04:21:47.130319   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:47.130393   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:47.141117   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:21:47.141134   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:47.141139   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:47.179650   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:47.179659   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:47.218712   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:21:47.218723   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:21:47.233074   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:21:47.233085   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:21:47.252406   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:21:47.252417   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:21:47.265061   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:21:47.265073   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:47.276512   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:47.276522   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:47.299932   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:47.299940   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:47.304692   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:21:47.304702   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:21:47.316662   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:21:47.316670   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:21:47.329433   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:21:47.329442   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:21:47.343736   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:21:47.343746   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:21:47.355687   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:21:47.355698   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:21:49.875334   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:54.877505   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:54.877692   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:54.888976   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:21:54.889063   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:54.904573   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:21:54.904635   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:54.915513   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:21:54.915590   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:54.929354   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:21:54.929424   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:54.940291   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:21:54.940367   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:54.952551   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:21:54.952631   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:54.963654   17177 logs.go:276] 0 containers: []
	W0304 04:21:54.963665   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:54.963724   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:54.978988   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:21:54.979002   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:21:54.979009   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:21:54.990357   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:54.990368   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:54.994560   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:54.994567   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:55.030121   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:21:55.030134   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:21:55.044999   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:21:55.045011   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:21:55.060009   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:21:55.060020   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:21:55.072208   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:21:55.072221   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:21:55.086815   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:21:55.086825   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:21:55.098887   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:55.098898   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:55.137178   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:21:55.137187   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:21:55.149541   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:21:55.149550   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:21:55.167766   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:55.167776   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:55.192221   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:21:55.192227   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:57.706522   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:02.708766   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:02.709163   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:02.740751   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:02.740899   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:02.760709   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:02.760808   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:02.775606   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:02.775685   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:02.789137   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:02.789214   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:02.800083   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:02.800156   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:02.813836   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:02.813906   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:02.824163   17177 logs.go:276] 0 containers: []
	W0304 04:22:02.824173   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:02.824228   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:02.835011   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:02.835025   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:02.835030   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:02.852286   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:02.852297   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:02.863966   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:02.863976   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:02.869121   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:02.869131   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:02.905102   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:02.905113   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:02.927147   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:02.927159   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:02.939313   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:02.939326   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:02.955770   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:02.955782   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:02.967590   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:02.967598   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:02.990969   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:02.990977   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:03.002698   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:03.002711   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:03.040448   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:03.040456   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:03.062085   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:03.062097   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:05.577898   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:10.580141   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:10.580554   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:10.612808   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:10.612964   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:10.631809   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:10.631906   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:10.645692   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:10.645769   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:10.657843   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:10.657916   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:10.668538   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:10.668614   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:10.679216   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:10.679283   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:10.689386   17177 logs.go:276] 0 containers: []
	W0304 04:22:10.689408   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:10.689473   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:10.700002   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:10.700019   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:10.700025   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:10.723867   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:10.723877   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:10.735228   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:10.735239   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:10.739942   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:10.739951   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:10.775456   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:10.775466   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:10.794125   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:10.794135   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:10.806856   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:10.806870   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:10.819145   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:10.819156   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:10.831492   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:10.831503   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:10.871879   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:10.871896   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:10.886347   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:10.886357   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:10.898648   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:10.898659   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:10.913219   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:10.913229   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:13.434131   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:18.436337   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:18.436595   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:18.456767   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:18.456900   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:18.471918   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:18.471993   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:18.484279   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:18.484344   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:18.494938   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:18.495009   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:18.505361   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:18.505437   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:18.516061   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:18.516135   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:18.525893   17177 logs.go:276] 0 containers: []
	W0304 04:22:18.525902   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:18.525971   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:18.536746   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:18.536764   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:18.536770   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:18.576816   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:18.576825   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:18.581240   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:18.581247   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:18.615667   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:18.615681   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:18.634087   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:18.634099   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:18.648848   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:18.648857   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:18.661001   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:18.661013   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:18.674817   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:18.674828   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:18.687695   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:18.687709   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:18.699763   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:18.699777   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:18.717233   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:18.717243   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:18.728695   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:18.728705   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:18.753134   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:18.753141   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:21.268020   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:26.270284   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:26.270464   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:26.290230   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:26.290327   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:26.305323   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:26.305406   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:26.317695   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:26.317771   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:26.327855   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:26.327926   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:26.338692   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:26.338760   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:26.348799   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:26.348869   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:26.358747   17177 logs.go:276] 0 containers: []
	W0304 04:22:26.358758   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:26.358821   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:26.369124   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:26.369138   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:26.369143   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:26.383755   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:26.383767   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:26.395625   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:26.395637   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:26.410220   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:26.410231   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:26.422221   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:26.422232   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:26.461954   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:26.461968   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:26.473407   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:26.473417   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:26.485205   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:26.485218   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:26.501008   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:26.501017   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:26.518090   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:26.518105   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:26.529439   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:26.529449   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:26.547405   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:26.547415   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:26.552552   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:26.552561   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:26.593123   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:26.593134   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:26.616367   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:26.616373   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:29.129841   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:34.132157   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:34.132335   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:34.144208   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:34.144282   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:34.154934   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:34.155000   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:34.167195   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:34.167273   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:34.177419   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:34.177485   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:34.187733   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:34.187802   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:34.198416   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:34.198484   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:34.208848   17177 logs.go:276] 0 containers: []
	W0304 04:22:34.208869   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:34.208936   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:34.219365   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:34.219381   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:34.219387   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:34.297924   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:34.297938   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:34.315023   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:34.315037   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:34.320138   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:34.320146   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:34.332392   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:34.332401   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:34.344675   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:34.344687   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:34.358772   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:34.358785   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:34.372576   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:34.372593   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:34.385427   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:34.385441   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:34.400797   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:34.400806   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:34.412956   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:34.412970   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:34.424985   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:34.424996   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:34.463819   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:34.463831   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:34.474896   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:34.474910   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:34.486113   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:34.486122   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:37.011128   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:42.013551   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:42.013851   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:42.043988   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:42.044119   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:42.062848   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:42.062945   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:42.089820   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:42.089894   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:42.100966   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:42.101034   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:42.113747   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:42.113817   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:42.124280   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:42.124353   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:42.134903   17177 logs.go:276] 0 containers: []
	W0304 04:22:42.134914   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:42.134990   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:42.150571   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:42.150589   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:42.150594   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:42.165712   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:42.165725   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:42.181590   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:42.181602   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:42.207011   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:42.207020   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:42.247532   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:42.247542   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:42.252550   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:42.252559   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:42.266933   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:42.266944   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:42.284313   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:42.284324   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:42.295812   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:42.295824   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:42.335304   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:42.335315   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:42.347339   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:42.347349   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:42.362493   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:42.362504   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:42.373869   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:42.373880   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:42.386526   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:42.386536   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:42.401150   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:42.401159   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:44.915564   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:49.918078   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:49.918342   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:49.945604   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:49.945710   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:49.963552   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:49.963645   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:49.978183   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:49.978247   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:49.989400   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:49.989460   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:49.999197   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:49.999268   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:50.009379   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:50.009438   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:50.019494   17177 logs.go:276] 0 containers: []
	W0304 04:22:50.019506   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:50.019566   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:50.030015   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:50.030031   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:50.030036   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:50.068884   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:50.068900   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:50.092220   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:50.092229   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:50.106538   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:50.106552   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:50.119280   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:50.119294   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:50.131197   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:50.131211   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:50.145203   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:50.145216   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:50.185946   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:50.185957   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:50.199600   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:50.199612   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:50.211539   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:50.211548   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:50.223104   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:50.223118   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:50.234706   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:50.234720   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:50.246692   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:50.246702   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:50.258832   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:50.258846   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:50.274020   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:50.274031   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:52.792790   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:57.795659   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:57.796033   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:57.836536   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:57.836649   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:57.852888   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:57.852978   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:57.865797   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:57.865876   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:57.877365   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:57.877425   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:57.887991   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:57.888059   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:57.899428   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:57.899491   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:57.910218   17177 logs.go:276] 0 containers: []
	W0304 04:22:57.910227   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:57.910279   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:57.923329   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:57.923347   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:57.923352   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:57.942103   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:57.942114   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:57.953919   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:57.953931   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:57.991316   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:57.991324   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:58.011583   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:58.011595   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:58.023892   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:58.023905   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:58.042375   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:58.042384   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:58.047120   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:58.047129   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:58.062278   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:58.062293   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:58.085839   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:58.085862   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:58.097380   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:58.097391   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:58.113007   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:58.113018   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:58.126937   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:58.126947   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:58.139187   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:58.139202   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:58.151216   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:58.151227   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:00.711109   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:05.712812   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:05.713014   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:05.729631   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:05.729706   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:05.742447   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:05.742512   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:05.754187   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:05.754271   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:05.765076   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:05.765148   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:05.777963   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:05.778031   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:05.788770   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:05.788842   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:05.799508   17177 logs.go:276] 0 containers: []
	W0304 04:23:05.799523   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:05.799580   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:05.810003   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:05.810019   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:05.810025   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:05.854096   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:05.854107   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:05.866137   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:05.866152   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:05.881129   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:05.881141   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:05.898515   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:05.898525   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:05.923867   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:05.923875   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:05.936165   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:05.936175   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:05.947777   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:05.947788   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:05.985693   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:05.985700   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:05.997835   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:05.997847   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:06.013554   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:06.013567   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:06.024824   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:06.024838   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:06.029183   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:06.029188   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:06.050689   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:06.050700   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:06.064776   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:06.064788   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:08.578032   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:13.580285   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:13.580396   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:13.592526   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:13.592599   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:13.604077   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:13.604152   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:13.615470   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:13.615543   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:13.626013   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:13.626090   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:13.640801   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:13.640877   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:13.653030   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:13.653103   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:13.664902   17177 logs.go:276] 0 containers: []
	W0304 04:23:13.664915   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:13.664977   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:13.678017   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:13.678036   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:13.678043   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:13.682790   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:13.682800   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:13.708566   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:13.708576   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:13.723993   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:13.724003   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:13.741182   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:13.741190   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:13.777305   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:13.777319   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:13.792358   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:13.792369   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:13.806679   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:13.806688   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:13.818946   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:13.818957   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:13.830597   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:13.830607   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:13.842787   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:13.842797   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:13.880421   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:13.880429   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:13.899956   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:13.899967   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:13.911857   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:13.911866   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:13.923677   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:13.923691   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:16.438219   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:21.440271   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:21.440532   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:21.462153   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:21.462264   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:21.477127   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:21.477221   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:21.490202   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:21.490276   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:21.500857   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:21.500918   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:21.511204   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:21.511275   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:21.528334   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:21.528404   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:21.539802   17177 logs.go:276] 0 containers: []
	W0304 04:23:21.539815   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:21.539874   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:21.553523   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:21.553540   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:21.553545   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:21.565552   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:21.565561   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:21.583208   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:21.583219   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:21.594555   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:21.594569   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:21.599056   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:21.599065   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:21.618119   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:21.618127   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:21.629857   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:21.629871   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:21.641210   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:21.641221   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:21.655748   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:21.655757   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:21.675824   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:21.675836   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:21.699456   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:21.699466   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:21.738245   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:21.738253   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:21.774489   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:21.774503   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:21.786205   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:21.786215   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:21.798573   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:21.798585   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:24.315611   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:29.317966   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:29.318116   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:29.329210   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:29.329279   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:29.340320   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:29.340398   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:29.350798   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:29.350873   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:29.361041   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:29.361115   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:29.371895   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:29.371963   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:29.382844   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:29.382916   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:29.397715   17177 logs.go:276] 0 containers: []
	W0304 04:23:29.397728   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:29.397786   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:29.413021   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:29.413037   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:29.413043   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:29.428125   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:29.428138   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:29.446114   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:29.446125   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:29.471322   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:29.471337   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:29.486310   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:29.486323   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:29.498140   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:29.498180   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:29.510116   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:29.510126   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:29.522631   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:29.522644   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:29.534292   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:29.534303   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:29.573400   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:29.573410   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:29.610825   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:29.610841   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:29.625317   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:29.625328   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:29.637782   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:29.637796   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:29.650355   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:29.650371   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:29.655174   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:29.655192   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:32.169713   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:37.171984   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:37.172205   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:37.193603   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:37.193703   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:37.209337   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:37.209416   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:37.222412   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:37.222480   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:37.233570   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:37.233641   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:37.244207   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:37.244273   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:37.261333   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:37.261396   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:37.281590   17177 logs.go:276] 0 containers: []
	W0304 04:23:37.281602   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:37.281661   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:37.294559   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:37.294576   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:37.294581   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:37.331111   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:37.331123   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:37.346984   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:37.346997   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:37.358904   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:37.358919   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:37.370575   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:37.370587   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:37.391769   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:37.391779   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:37.430352   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:37.430360   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:37.443796   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:37.443805   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:37.461355   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:37.461365   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:37.479183   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:37.479193   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:37.490995   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:37.491007   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:37.515305   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:37.515312   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:37.526966   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:37.526975   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:37.532209   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:37.532217   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:37.546470   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:37.546483   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:40.060531   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:45.063179   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:45.063453   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:45.088149   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:45.088268   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:45.104003   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:45.104090   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:45.119544   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:45.119616   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:45.130355   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:45.130428   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:45.140343   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:45.140410   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:45.151145   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:45.151205   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:45.161438   17177 logs.go:276] 0 containers: []
	W0304 04:23:45.161450   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:45.161517   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:45.177916   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:45.177932   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:45.177938   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:45.218363   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:45.218371   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:45.230012   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:45.230025   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:45.244181   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:45.244192   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:45.255877   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:45.255887   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:45.280098   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:45.280106   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:45.291724   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:45.291733   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:45.296597   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:45.296607   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:45.333017   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:45.333028   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:45.347863   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:45.347875   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:45.366253   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:45.366264   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:45.378143   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:45.378152   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:45.396286   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:45.396300   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:45.410496   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:45.410505   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:45.422342   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:45.422355   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:47.943379   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:52.945917   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:52.946065   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:52.961627   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:52.961710   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:52.974181   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:52.974261   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:52.985724   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:52.985789   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:52.996509   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:52.996584   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:53.006686   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:53.006749   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:53.016859   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:53.016936   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:53.027043   17177 logs.go:276] 0 containers: []
	W0304 04:23:53.027054   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:53.027111   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:53.038299   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:53.038317   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:53.038322   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:53.052569   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:53.052581   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:53.064883   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:53.064895   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:53.080099   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:53.080110   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:53.098294   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:53.098307   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:53.133453   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:53.133467   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:53.144983   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:53.144998   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:53.156329   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:53.156341   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:53.181054   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:53.181065   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:53.193467   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:53.193481   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:53.207695   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:53.207707   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:53.219564   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:53.219578   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:53.258495   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:53.258504   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:53.270356   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:53.270366   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:53.281848   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:53.281862   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:55.788492   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:00.790565   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:00.790734   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:00.804876   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:00.804959   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:00.816468   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:00.816535   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:00.827140   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:24:00.827210   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:00.837223   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:00.837287   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:00.848641   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:00.848714   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:00.859495   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:00.859563   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:00.869338   17177 logs.go:276] 0 containers: []
	W0304 04:24:00.869349   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:00.869409   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:00.879465   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:00.879479   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:00.879484   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:00.918168   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:24:00.918177   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:24:00.930614   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:00.930628   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:00.942991   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:00.943003   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:00.957676   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:00.957690   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:00.975248   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:00.975259   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:00.987196   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:00.987209   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:01.010037   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:01.010045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:01.021725   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:01.021734   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:01.057571   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:01.057583   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:01.072764   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:24:01.072775   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:24:01.085686   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:01.085696   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:01.097402   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:01.097413   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:01.101725   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:01.101732   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:01.115374   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:01.115383   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:03.632264   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:08.634865   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:08.635257   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:08.674424   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:08.674519   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:08.691457   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:08.691530   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:08.703516   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:24:08.703591   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:08.714578   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:08.714655   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:08.725121   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:08.725198   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:08.736089   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:08.736153   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:08.746501   17177 logs.go:276] 0 containers: []
	W0304 04:24:08.746512   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:08.746573   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:08.756502   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:08.756518   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:08.756523   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:08.770696   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:08.770706   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:08.782528   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:08.782538   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:08.821082   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:24:08.821093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:24:08.832849   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:24:08.832860   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:24:08.852113   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:08.852123   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:08.863569   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:08.863580   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:08.876257   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:08.876267   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:08.888408   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:08.888419   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:08.904388   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:08.904399   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:08.919267   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:08.919279   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:08.947366   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:08.947376   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:08.982308   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:08.982318   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:08.997908   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:08.997916   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:09.021582   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:09.021593   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:11.527875   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:16.530107   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:16.530209   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:16.541266   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:16.541339   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:16.551450   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:16.551513   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:16.562340   17177 logs.go:276] 4 containers: [0d8f3b5bfecb 705f11ca41c7 ac9fa5422c38 127d7b70714a]
	I0304 04:24:16.562420   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:16.572772   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:16.572836   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:16.583214   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:16.583280   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:16.594192   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:16.594248   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:16.604355   17177 logs.go:276] 0 containers: []
	W0304 04:24:16.604369   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:16.604422   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:16.614778   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:16.614797   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:16.614801   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:16.629093   17177 logs.go:123] Gathering logs for coredns [0d8f3b5bfecb] ...
	I0304 04:24:16.629103   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8f3b5bfecb"
	I0304 04:24:16.640381   17177 logs.go:123] Gathering logs for coredns [705f11ca41c7] ...
	I0304 04:24:16.640392   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705f11ca41c7"
	I0304 04:24:16.652050   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:16.652061   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:16.691443   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:16.691452   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:16.696499   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:16.696505   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:16.709049   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:16.709063   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:16.721496   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:16.721510   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:16.741029   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:16.741045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:16.757214   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:16.757228   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:16.771060   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:16.771070   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:16.810112   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:16.810124   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:16.824151   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:16.824162   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:16.845819   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:16.845829   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:16.857558   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:16.857570   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:19.383106   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:24.385357   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:24.385491   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:24.399100   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:24.399176   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:24.409972   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:24.410050   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:24.420723   17177 logs.go:276] 4 containers: [0d8f3b5bfecb 705f11ca41c7 ac9fa5422c38 127d7b70714a]
	I0304 04:24:24.420796   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:24.431930   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:24.431999   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:24.442508   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:24.442571   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:24.454644   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:24.454715   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:24.464633   17177 logs.go:276] 0 containers: []
	W0304 04:24:24.464643   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:24.464699   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:24.474905   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:24.474919   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:24.474932   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:24.496121   17177 logs.go:123] Gathering logs for coredns [0d8f3b5bfecb] ...
	I0304 04:24:24.496132   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8f3b5bfecb"
	I0304 04:24:24.507763   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:24.507776   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:24.519746   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:24.519757   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:24.542660   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:24.542669   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:24.554250   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:24.554261   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:24.590959   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:24.590971   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:24.608315   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:24.608326   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:24.624124   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:24.624136   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:24.638708   17177 logs.go:123] Gathering logs for coredns [705f11ca41c7] ...
	I0304 04:24:24.638721   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705f11ca41c7"
	I0304 04:24:24.650519   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:24.650530   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:24.666108   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:24.666121   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:24.678044   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:24.678054   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:24.682270   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:24.682276   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:24.695313   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:24.695327   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:27.236987   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:32.238436   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:32.238702   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:32.264791   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:32.264916   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:32.281870   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:32.281949   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:32.295349   17177 logs.go:276] 4 containers: [0d8f3b5bfecb 705f11ca41c7 ac9fa5422c38 127d7b70714a]
	I0304 04:24:32.295426   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:32.307083   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:32.307150   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:32.318177   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:32.318252   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:32.329098   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:32.329197   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:32.339400   17177 logs.go:276] 0 containers: []
	W0304 04:24:32.339415   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:32.339490   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:32.349907   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:32.349924   17177 logs.go:123] Gathering logs for coredns [705f11ca41c7] ...
	I0304 04:24:32.349928   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705f11ca41c7"
	I0304 04:24:32.361958   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:32.361971   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:32.376896   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:32.376911   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:32.394835   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:32.394846   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:32.433442   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:32.433451   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:32.445279   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:32.445290   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:32.456701   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:32.456712   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:32.468277   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:32.468288   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:32.472811   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:32.472821   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:32.484868   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:32.484879   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:32.496099   17177 logs.go:123] Gathering logs for coredns [0d8f3b5bfecb] ...
	I0304 04:24:32.496112   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8f3b5bfecb"
	I0304 04:24:32.510539   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:32.510551   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:32.532695   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:32.532703   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:32.568827   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:32.568837   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:32.582866   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:32.582876   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:35.099128   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:40.101554   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:40.106683   17177 out.go:177] 
	W0304 04:24:40.111545   17177 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0304 04:24:40.111575   17177 out.go:239] * 
	W0304 04:24:40.113516   17177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:24:40.119628   17177 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-156000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-04 04:24:40.231115 -0800 PST m=+1226.830480918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-156000 -n running-upgrade-156000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-156000 -n running-upgrade-156000: exit status 2 (15.694616292s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-156000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-322000          | force-systemd-flag-322000 | jenkins | v1.32.0 | 04 Mar 24 04:13 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-315000              | force-systemd-env-315000  | jenkins | v1.32.0 | 04 Mar 24 04:13 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-315000           | force-systemd-env-315000  | jenkins | v1.32.0 | 04 Mar 24 04:13 PST | 04 Mar 24 04:13 PST |
	| start   | -p docker-flags-169000                | docker-flags-169000       | jenkins | v1.32.0 | 04 Mar 24 04:13 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-322000             | force-systemd-flag-322000 | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-322000          | force-systemd-flag-322000 | jenkins | v1.32.0 | 04 Mar 24 04:14 PST | 04 Mar 24 04:14 PST |
	| start   | -p cert-expiration-323000             | cert-expiration-323000    | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-169000 ssh               | docker-flags-169000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-169000 ssh               | docker-flags-169000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-169000                | docker-flags-169000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST | 04 Mar 24 04:14 PST |
	| start   | -p cert-options-861000                | cert-options-861000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-861000 ssh               | cert-options-861000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-861000 -- sudo        | cert-options-861000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-861000                | cert-options-861000       | jenkins | v1.32.0 | 04 Mar 24 04:14 PST | 04 Mar 24 04:14 PST |
	| start   | -p running-upgrade-156000             | minikube                  | jenkins | v1.26.0 | 04 Mar 24 04:14 PST | 04 Mar 24 04:15 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-156000             | running-upgrade-156000    | jenkins | v1.32.0 | 04 Mar 24 04:15 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-323000             | cert-expiration-323000    | jenkins | v1.32.0 | 04 Mar 24 04:17 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-323000             | cert-expiration-323000    | jenkins | v1.32.0 | 04 Mar 24 04:17 PST | 04 Mar 24 04:17 PST |
	| start   | -p kubernetes-upgrade-323000          | kubernetes-upgrade-323000 | jenkins | v1.32.0 | 04 Mar 24 04:17 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-323000          | kubernetes-upgrade-323000 | jenkins | v1.32.0 | 04 Mar 24 04:17 PST | 04 Mar 24 04:17 PST |
	| start   | -p kubernetes-upgrade-323000          | kubernetes-upgrade-323000 | jenkins | v1.32.0 | 04 Mar 24 04:17 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-323000          | kubernetes-upgrade-323000 | jenkins | v1.32.0 | 04 Mar 24 04:17 PST | 04 Mar 24 04:17 PST |
	| start   | -p stopped-upgrade-289000             | minikube                  | jenkins | v1.26.0 | 04 Mar 24 04:17 PST | 04 Mar 24 04:18 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-289000 stop           | minikube                  | jenkins | v1.26.0 | 04 Mar 24 04:18 PST | 04 Mar 24 04:18 PST |
	| start   | -p stopped-upgrade-289000             | stopped-upgrade-289000    | jenkins | v1.32.0 | 04 Mar 24 04:18 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/04 04:18:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0304 04:18:34.280959   17343 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:18:34.281131   17343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:18:34.281135   17343 out.go:304] Setting ErrFile to fd 2...
	I0304 04:18:34.281138   17343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:18:34.281291   17343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:18:34.282679   17343 out.go:298] Setting JSON to false
	I0304 04:18:34.303418   17343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10086,"bootTime":1709544628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:18:34.303492   17343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:18:34.307863   17343 out.go:177] * [stopped-upgrade-289000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:18:34.314881   17343 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:18:34.315000   17343 notify.go:220] Checking for updates...
	I0304 04:18:34.318805   17343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:18:34.321896   17343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:18:34.324879   17343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:18:34.327859   17343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:18:34.330877   17343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:18:34.334097   17343 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:18:34.336833   17343 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0304 04:18:34.339877   17343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:18:34.343684   17343 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:18:34.350801   17343 start.go:299] selected driver: qemu2
	I0304 04:18:34.350811   17343 start.go:903] validating driver "qemu2" against &{Name:stopped-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52792 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:18:34.350874   17343 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:18:34.353717   17343 cni.go:84] Creating CNI manager for ""
	I0304 04:18:34.353741   17343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:18:34.353746   17343 start_flags.go:323] config:
	{Name:stopped-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52792 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:18:34.353838   17343 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:18:34.360803   17343 out.go:177] * Starting control plane node stopped-upgrade-289000 in cluster stopped-upgrade-289000
	I0304 04:18:34.364771   17343 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0304 04:18:34.364804   17343 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0304 04:18:34.364811   17343 cache.go:56] Caching tarball of preloaded images
	I0304 04:18:34.364890   17343 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:18:34.364897   17343 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0304 04:18:34.364962   17343 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/config.json ...
	I0304 04:18:34.365330   17343 start.go:365] acquiring machines lock for stopped-upgrade-289000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:18:34.365370   17343 start.go:369] acquired machines lock for "stopped-upgrade-289000" in 32.083µs
	I0304 04:18:34.365383   17343 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:18:34.365387   17343 fix.go:54] fixHost starting: 
	I0304 04:18:34.365499   17343 fix.go:102] recreateIfNeeded on stopped-upgrade-289000: state=Stopped err=<nil>
	W0304 04:18:34.365509   17343 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:18:34.368878   17343 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-289000" ...
	I0304 04:18:34.202971   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:34.203316   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:34.238201   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:34.238307   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:34.259506   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:34.259589   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:34.275179   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:34.275243   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:34.287630   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:34.287685   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:34.298794   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:34.298864   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:34.309636   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:34.309696   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:34.319610   17177 logs.go:276] 0 containers: []
	W0304 04:18:34.319620   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:34.319663   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:34.329889   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:34.329904   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:34.329909   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:34.350616   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:34.350624   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:34.365133   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:34.365142   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:34.383571   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:34.383582   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:34.395528   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:34.395540   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:34.409843   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:34.409866   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:34.423061   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:34.423074   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:34.438792   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:34.438804   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:34.450932   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:34.450942   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:34.463163   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:34.463175   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:34.481202   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:34.481212   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:34.493263   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:34.493273   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:34.521737   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:34.521756   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:34.563744   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:34.563760   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:34.568443   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:34.568452   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:34.607133   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:34.607146   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:37.127073   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:34.376903   17343 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52757-:22,hostfwd=tcp::52758-:2376,hostname=stopped-upgrade-289000 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/disk.qcow2
	I0304 04:18:34.426347   17343 main.go:141] libmachine: STDOUT: 
	I0304 04:18:34.426374   17343 main.go:141] libmachine: STDERR: 
	I0304 04:18:34.426380   17343 main.go:141] libmachine: Waiting for VM to start (ssh -p 52757 docker@127.0.0.1)...
	I0304 04:18:42.129502   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:42.129631   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:42.141890   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:42.141967   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:42.153412   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:42.153491   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:42.164210   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:42.164275   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:42.174833   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:42.174907   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:42.185826   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:42.185920   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:42.196780   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:42.196845   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:42.206556   17177 logs.go:276] 0 containers: []
	W0304 04:18:42.206566   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:42.206615   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:42.218877   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:42.218894   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:42.218899   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:42.236441   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:42.236457   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:42.250815   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:42.250829   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:42.292175   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:42.292183   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:42.304134   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:42.304143   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:42.329301   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:42.329308   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:42.341432   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:42.341442   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:42.353769   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:42.353781   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:42.391918   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:42.391929   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:42.411148   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:42.411157   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:42.423935   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:42.423951   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:42.428601   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:42.428608   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:42.451081   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:42.451092   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:42.463882   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:42.463893   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:42.476191   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:42.476202   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:42.494000   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:42.494011   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:45.010244   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:50.012519   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:50.012969   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:50.049910   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:50.050052   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:50.077131   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:50.077222   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:50.090914   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:50.090988   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:50.102238   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:50.102309   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:50.113103   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:50.113170   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:50.123648   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:50.123710   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:50.133884   17177 logs.go:276] 0 containers: []
	W0304 04:18:50.133895   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:50.133953   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:50.144484   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:50.144500   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:50.144506   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:50.162414   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:50.162427   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:50.174685   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:50.174696   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:50.186438   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:50.186452   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:50.222195   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:50.222209   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:50.236778   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:50.236789   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:50.254815   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:50.254829   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:50.279635   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:50.279642   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:50.293436   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:50.293447   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:50.304983   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:50.304995   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:50.318994   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:50.319008   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:50.337993   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:50.338002   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:50.349775   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:50.349785   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:50.362456   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:50.362469   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:50.400403   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:50.400409   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:50.404450   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:50.404460   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:52.919577   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:18:54.876178   17343 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/config.json ...
	I0304 04:18:54.876837   17343 machine.go:88] provisioning docker machine ...
	I0304 04:18:54.876909   17343 buildroot.go:166] provisioning hostname "stopped-upgrade-289000"
	I0304 04:18:54.877037   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:54.877463   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:54.877479   17343 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-289000 && echo "stopped-upgrade-289000" | sudo tee /etc/hostname
	I0304 04:18:54.977148   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-289000
	
	I0304 04:18:54.977275   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:54.977471   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:54.977483   17343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-289000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-289000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-289000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0304 04:18:55.058599   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0304 04:18:55.058614   17343 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18284-15061/.minikube CaCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18284-15061/.minikube}
	I0304 04:18:55.058634   17343 buildroot.go:174] setting up certificates
	I0304 04:18:55.058645   17343 provision.go:83] configureAuth start
	I0304 04:18:55.058650   17343 provision.go:138] copyHostCerts
	I0304 04:18:55.058737   17343 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem, removing ...
	I0304 04:18:55.058748   17343 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem
	I0304 04:18:55.058879   17343 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem (1082 bytes)
	I0304 04:18:55.059106   17343 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem, removing ...
	I0304 04:18:55.059111   17343 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem
	I0304 04:18:55.059182   17343 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem (1123 bytes)
	I0304 04:18:55.059329   17343 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem, removing ...
	I0304 04:18:55.059334   17343 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem
	I0304 04:18:55.059444   17343 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem (1679 bytes)
	I0304 04:18:55.059559   17343 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-289000 san=[127.0.0.1 localhost localhost 127.0.0.1 minikube stopped-upgrade-289000]
	I0304 04:18:55.131470   17343 provision.go:172] copyRemoteCerts
	I0304 04:18:55.131504   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0304 04:18:55.131512   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:18:55.168365   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0304 04:18:55.175413   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0304 04:18:55.182536   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0304 04:18:55.189143   17343 provision.go:86] duration metric: configureAuth took 130.489917ms
	I0304 04:18:55.189155   17343 buildroot.go:189] setting minikube options for container-runtime
	I0304 04:18:55.189250   17343 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:18:55.189287   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.189373   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.189378   17343 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0304 04:18:55.262233   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0304 04:18:55.262242   17343 buildroot.go:70] root file system type: tmpfs
	I0304 04:18:55.262293   17343 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0304 04:18:55.262338   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.262440   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.262475   17343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0304 04:18:55.337253   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0304 04:18:55.337315   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.337428   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.337436   17343 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0304 04:18:55.696107   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0304 04:18:55.696121   17343 machine.go:91] provisioned docker machine in 819.279583ms
	I0304 04:18:55.696130   17343 start.go:300] post-start starting for "stopped-upgrade-289000" (driver="qemu2")
	I0304 04:18:55.696137   17343 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0304 04:18:55.696200   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0304 04:18:55.696209   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:18:55.736673   17343 ssh_runner.go:195] Run: cat /etc/os-release
	I0304 04:18:55.737929   17343 info.go:137] Remote host: Buildroot 2021.02.12
	I0304 04:18:55.737937   17343 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18284-15061/.minikube/addons for local assets ...
	I0304 04:18:55.738021   17343 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18284-15061/.minikube/files for local assets ...
	I0304 04:18:55.738142   17343 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem -> 154862.pem in /etc/ssl/certs
	I0304 04:18:55.738266   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0304 04:18:55.740712   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem --> /etc/ssl/certs/154862.pem (1708 bytes)
	I0304 04:18:55.747646   17343 start.go:303] post-start completed in 51.51075ms
	I0304 04:18:55.747653   17343 fix.go:56] fixHost completed within 21.382393625s
	I0304 04:18:55.747688   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.747786   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.747790   17343 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0304 04:18:55.820195   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709554736.280054837
	
	I0304 04:18:55.820203   17343 fix.go:206] guest clock: 1709554736.280054837
	I0304 04:18:55.820207   17343 fix.go:219] Guest: 2024-03-04 04:18:56.280054837 -0800 PST Remote: 2024-03-04 04:18:55.747655 -0800 PST m=+21.501023126 (delta=532.399837ms)
	I0304 04:18:55.820220   17343 fix.go:190] guest clock delta is within tolerance: 532.399837ms
	I0304 04:18:55.820223   17343 start.go:83] releasing machines lock for "stopped-upgrade-289000", held for 21.454972084s
	I0304 04:18:55.820297   17343 ssh_runner.go:195] Run: cat /version.json
	I0304 04:18:55.820306   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:18:55.820362   17343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0304 04:18:55.820403   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	W0304 04:18:55.821007   17343 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52757: connect: connection refused
	I0304 04:18:55.821031   17343 retry.go:31] will retry after 181.279797ms: dial tcp [::1]:52757: connect: connection refused
	W0304 04:18:56.053756   17343 start.go:420] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0304 04:18:56.053936   17343 ssh_runner.go:195] Run: systemctl --version
	I0304 04:18:56.057890   17343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0304 04:18:56.061113   17343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0304 04:18:56.061174   17343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0304 04:18:56.066392   17343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0304 04:18:56.075676   17343 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0304 04:18:56.075691   17343 start.go:475] detecting cgroup driver to use...
	I0304 04:18:56.075804   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0304 04:18:56.086301   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0304 04:18:56.090299   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0304 04:18:56.093880   17343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0304 04:18:56.093911   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0304 04:18:56.097576   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0304 04:18:56.100987   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0304 04:18:56.104315   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0304 04:18:56.107940   17343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0304 04:18:56.111272   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0304 04:18:56.114352   17343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0304 04:18:56.116932   17343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0304 04:18:56.119804   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:56.190473   17343 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0304 04:18:56.197378   17343 start.go:475] detecting cgroup driver to use...
	I0304 04:18:56.197445   17343 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0304 04:18:56.205782   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0304 04:18:56.211671   17343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0304 04:18:56.218790   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0304 04:18:56.223770   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0304 04:18:56.229387   17343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0304 04:18:56.291148   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0304 04:18:56.296603   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0304 04:18:56.302055   17343 ssh_runner.go:195] Run: which cri-dockerd
	I0304 04:18:56.303393   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0304 04:18:56.306479   17343 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0304 04:18:56.311422   17343 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0304 04:18:56.375734   17343 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0304 04:18:56.439015   17343 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0304 04:18:56.439090   17343 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0304 04:18:56.444516   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:56.508182   17343 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0304 04:18:57.673811   17343 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.16562025s)
	I0304 04:18:57.673886   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0304 04:18:57.679732   17343 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0304 04:18:57.686869   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0304 04:18:57.692148   17343 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0304 04:18:57.765724   17343 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0304 04:18:57.831788   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:57.895817   17343 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0304 04:18:57.903212   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0304 04:18:57.908509   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:57.979873   17343 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0304 04:18:58.023820   17343 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0304 04:18:58.023895   17343 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0304 04:18:58.026434   17343 start.go:543] Will wait 60s for crictl version
	I0304 04:18:58.026488   17343 ssh_runner.go:195] Run: which crictl
	I0304 04:18:58.028583   17343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0304 04:18:58.045736   17343 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0304 04:18:58.045819   17343 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0304 04:18:58.065004   17343 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0304 04:18:57.922280   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:18:57.922402   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:18:57.933372   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:18:57.933444   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:18:57.944758   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:18:57.944830   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:18:57.955180   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:18:57.955257   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:18:57.966538   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:18:57.966610   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:18:57.978094   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:18:57.978166   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:18:57.993718   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:18:57.993785   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:18:58.004748   17177 logs.go:276] 0 containers: []
	W0304 04:18:58.004761   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:18:58.004817   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:18:58.016316   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:18:58.016332   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:18:58.016337   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:18:58.033836   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:18:58.033848   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:18:58.049143   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:18:58.049152   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:18:58.076073   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:18:58.076096   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:18:58.116201   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:18:58.116212   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:18:58.132094   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:18:58.132108   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:18:58.146255   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:18:58.146274   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:18:58.186005   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:18:58.186018   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:18:58.202639   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:18:58.202652   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:18:58.217804   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:18:58.217815   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:18:58.236703   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:18:58.236717   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:18:58.249740   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:18:58.249755   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:18:58.264324   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:18:58.264336   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:18:58.268941   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:18:58.268953   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:18:58.290004   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:18:58.290017   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:18:58.303091   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:18:58.303103   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:18:58.089864   17343 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0304 04:18:58.089940   17343 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0304 04:18:58.091793   17343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0304 04:18:58.096612   17343 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0304 04:18:58.096659   17343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0304 04:18:58.108165   17343 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0304 04:18:58.108174   17343 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0304 04:18:58.108225   17343 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0304 04:18:58.112004   17343 ssh_runner.go:195] Run: which lz4
	I0304 04:18:58.113378   17343 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0304 04:18:58.114637   17343 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0304 04:18:58.114648   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0304 04:18:58.863743   17343 docker.go:649] Took 0.750397 seconds to copy over tarball
	I0304 04:18:58.863817   17343 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0304 04:19:00.825115   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:00.084137   17343 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.220303708s)
	I0304 04:19:00.084154   17343 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0304 04:19:00.101057   17343 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0304 04:19:00.104392   17343 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0304 04:19:00.109690   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:19:00.172332   17343 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0304 04:19:01.689812   17343 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.517470042s)
	I0304 04:19:01.689921   17343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0304 04:19:01.703936   17343 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0304 04:19:01.703946   17343 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0304 04:19:01.703951   17343 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0304 04:19:01.744618   17343 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:01.745739   17343 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:01.745847   17343 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:01.746044   17343 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:01.746209   17343 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:01.746266   17343 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0304 04:19:01.747383   17343 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:01.747684   17343 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:01.758812   17343 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:01.758882   17343 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:01.761359   17343 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:01.762044   17343 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:01.762121   17343 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:01.762189   17343 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:01.762308   17343 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0304 04:19:01.762332   17343 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:03.674807   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:03.689736   17343 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0304 04:19:03.689768   17343 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:03.689834   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:03.701783   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0304 04:19:03.754111   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:03.766612   17343 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0304 04:19:03.766630   17343 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:03.766678   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:03.777775   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0304 04:19:03.792330   17343 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0304 04:19:03.792447   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:03.793526   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:03.798153   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0304 04:19:03.804014   17343 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0304 04:19:03.804036   17343 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:03.804086   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:03.808167   17343 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0304 04:19:03.808187   17343 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:03.808240   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:03.808958   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:03.820483   17343 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0304 04:19:03.820505   17343 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0304 04:19:03.820564   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0304 04:19:03.820666   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:03.822213   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0304 04:19:03.822302   17343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0304 04:19:03.828578   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0304 04:19:03.852213   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0304 04:19:03.852231   17343 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0304 04:19:03.852251   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0304 04:19:03.852309   17343 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0304 04:19:03.852220   17343 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0304 04:19:03.852325   17343 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:03.852328   17343 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:03.852338   17343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0304 04:19:03.852363   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:03.852366   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:03.879861   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0304 04:19:03.889412   17343 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0304 04:19:03.889434   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0304 04:19:03.889439   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0304 04:19:03.903613   17343 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0304 04:19:03.903626   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0304 04:19:03.940590   17343 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0304 04:19:03.940612   17343 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0304 04:19:03.940618   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0304 04:19:03.966652   17343 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0304 04:19:05.827666   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:05.827773   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:05.839799   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:05.839876   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:05.850442   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:05.850512   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:05.861480   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:05.861549   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:05.871807   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:05.871873   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:05.882161   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:05.882226   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:05.897275   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:05.897348   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:05.907721   17177 logs.go:276] 0 containers: []
	W0304 04:19:05.907732   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:05.907786   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:05.918251   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:05.918284   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:05.918292   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:05.956640   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:05.956650   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:05.973767   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:05.973776   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:05.984774   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:05.984784   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:06.009270   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:06.009284   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:06.050911   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:06.050922   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:06.066750   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:06.066759   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:06.081713   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:06.081724   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:06.093080   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:06.093093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:06.104861   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:06.104874   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:06.116969   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:06.116983   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:06.130636   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:06.130647   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:06.151144   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:06.151158   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:06.169825   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:06.169843   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:06.187065   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:06.187075   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:06.191632   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:06.191641   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:08.712132   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:19:04.347356   17343 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0304 04:19:04.347919   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:04.387306   17343 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0304 04:19:04.387348   17343 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:04.387453   17343 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:04.413549   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0304 04:19:04.413688   17343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0304 04:19:04.415743   17343 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0304 04:19:04.415759   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0304 04:19:04.445589   17343 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0304 04:19:04.445605   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0304 04:19:04.701490   17343 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0304 04:19:04.701530   17343 cache_images.go:92] LoadImages completed in 2.997589041s
	W0304 04:19:04.701573   17343 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0304 04:19:04.701634   17343 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0304 04:19:04.714704   17343 cni.go:84] Creating CNI manager for ""
	I0304 04:19:04.714717   17343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:19:04.714725   17343 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0304 04:19:04.714734   17343 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-289000 NodeName:stopped-upgrade-289000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0304 04:19:04.714800   17343 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-289000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0304 04:19:04.714834   17343 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-289000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0304 04:19:04.714885   17343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0304 04:19:04.718045   17343 binaries.go:44] Found k8s binaries, skipping transfer
	I0304 04:19:04.718075   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0304 04:19:04.720602   17343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0304 04:19:04.725871   17343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0304 04:19:04.730680   17343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0304 04:19:04.736214   17343 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0304 04:19:04.737518   17343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0304 04:19:04.740850   17343 certs.go:56] Setting up /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000 for IP: 10.0.2.15
	I0304 04:19:04.740861   17343 certs.go:190] acquiring lock for shared ca certs: {Name:mk261f788a3b9cd088f9e587f9da53d875f26951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:19:04.740997   17343 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.key
	I0304 04:19:04.741322   17343 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.key
	I0304 04:19:04.741597   17343 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key
	I0304 04:19:04.741741   17343 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/apiserver.key.49504c3e
	I0304 04:19:04.741848   17343 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/proxy-client.key
	I0304 04:19:04.741986   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486.pem (1338 bytes)
	W0304 04:19:04.742136   17343 certs.go:433] ignoring /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486_empty.pem, impossibly tiny 0 bytes
	I0304 04:19:04.742143   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem (1675 bytes)
	I0304 04:19:04.742179   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem (1082 bytes)
	I0304 04:19:04.742199   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem (1123 bytes)
	I0304 04:19:04.742225   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem (1679 bytes)
	I0304 04:19:04.742265   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem (1708 bytes)
	I0304 04:19:04.742589   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0304 04:19:04.749530   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0304 04:19:04.756656   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0304 04:19:04.763529   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0304 04:19:04.770257   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0304 04:19:04.776937   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0304 04:19:04.783478   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0304 04:19:04.790449   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0304 04:19:04.797013   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486.pem --> /usr/share/ca-certificates/15486.pem (1338 bytes)
	I0304 04:19:04.803687   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem --> /usr/share/ca-certificates/154862.pem (1708 bytes)
	I0304 04:19:04.810821   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0304 04:19:04.817603   17343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0304 04:19:04.822782   17343 ssh_runner.go:195] Run: openssl version
	I0304 04:19:04.824788   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15486.pem && ln -fs /usr/share/ca-certificates/15486.pem /etc/ssl/certs/15486.pem"
	I0304 04:19:04.828161   17343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15486.pem
	I0304 04:19:04.829656   17343 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Mar  4 12:05 /usr/share/ca-certificates/15486.pem
	I0304 04:19:04.829675   17343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15486.pem
	I0304 04:19:04.831370   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15486.pem /etc/ssl/certs/51391683.0"
	I0304 04:19:04.834728   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154862.pem && ln -fs /usr/share/ca-certificates/154862.pem /etc/ssl/certs/154862.pem"
	I0304 04:19:04.837705   17343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154862.pem
	I0304 04:19:04.839089   17343 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Mar  4 12:05 /usr/share/ca-certificates/154862.pem
	I0304 04:19:04.839109   17343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154862.pem
	I0304 04:19:04.841008   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154862.pem /etc/ssl/certs/3ec20f2e.0"
	I0304 04:19:04.844149   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0304 04:19:04.847534   17343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:19:04.849236   17343 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Mar  4 12:15 /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:19:04.849261   17343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:19:04.851013   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0304 04:19:04.854121   17343 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0304 04:19:04.855563   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0304 04:19:04.858599   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0304 04:19:04.860559   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0304 04:19:04.863084   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0304 04:19:04.864951   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0304 04:19:04.866736   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0304 04:19:04.868665   17343 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52792 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:19:04.868729   17343 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0304 04:19:04.878587   17343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0304 04:19:04.881580   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:19:04.882439   17343 main.go:141] libmachine: Using SSH client type: external
	I0304 04:19:04.882457   17343 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa (-rw-------)
	I0304 04:19:04.882474   17343 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757] /usr/bin/ssh <nil>}
	I0304 04:19:04.882488   17343 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757 -f -NTL 52792:localhost:8443
	I0304 04:19:04.927612   17343 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0304 04:19:04.927702   17343 kubeadm.go:636] restartCluster start
	I0304 04:19:04.927758   17343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0304 04:19:04.931457   17343 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0304 04:19:04.931835   17343 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-289000" does not appear in /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:19:04.931938   17343 kubeconfig.go:146] "stopped-upgrade-289000" context is missing from /Users/jenkins/minikube-integration/18284-15061/kubeconfig - will repair!
	I0304 04:19:04.932180   17343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:19:04.932676   17343 kapi.go:59] client config for stopped-upgrade-289000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10254f7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:19:04.933184   17343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0304 04:19:04.935919   17343 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-289000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0304 04:19:04.935924   17343 kubeadm.go:1135] stopping kube-system containers ...
	I0304 04:19:04.935959   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0304 04:19:04.946867   17343 docker.go:483] Stopping containers: [0c27c99061a8 331d1cec5665 68d9e42070f0 a8a74fac7389 375c7c379b12 1385b50317f7 97c67652317e a736c2fdf75e]
	I0304 04:19:04.946961   17343 ssh_runner.go:195] Run: docker stop 0c27c99061a8 331d1cec5665 68d9e42070f0 a8a74fac7389 375c7c379b12 1385b50317f7 97c67652317e a736c2fdf75e
	I0304 04:19:04.957821   17343 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0304 04:19:04.963948   17343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:19:04.967218   17343 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0304 04:19:04.967245   17343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:19:04.970064   17343 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0304 04:19:04.970069   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:04.997020   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.474561   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.621341   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.648177   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.674932   17343 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:19:05.674995   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:19:06.177107   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:19:06.677057   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:19:06.682915   17343 api_server.go:72] duration metric: took 1.007989875s to wait for apiserver process to appear ...
	I0304 04:19:06.682926   17343 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:19:06.682939   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:13.714299   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:13.714445   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:13.728103   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:13.728176   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:13.740314   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:13.740386   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:13.751047   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:13.751105   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:13.761721   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:13.761792   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:13.774193   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:13.774266   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:13.784687   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:13.784754   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:13.795416   17177 logs.go:276] 0 containers: []
	W0304 04:19:13.795427   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:13.795484   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:13.806010   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:13.806027   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:13.806033   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:13.818325   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:13.818339   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:13.857162   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:13.857172   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:13.869198   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:13.869211   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:13.883136   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:13.883147   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:13.895572   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:13.895581   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:13.914013   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:13.914023   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:13.931247   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:13.931256   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:13.942167   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:13.942179   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:13.978078   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:13.978087   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:13.992165   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:13.992174   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:14.006064   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:14.006073   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:14.019888   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:14.019899   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:14.024172   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:14.024180   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:14.041755   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:14.041766   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:14.057034   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:14.057045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:11.684537   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:11.684580   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:16.582306   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:16.684876   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:16.684927   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:21.585012   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:21.585478   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:21.633235   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:21.633353   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:21.652134   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:21.652235   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:21.666024   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:21.666097   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:21.678126   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:21.678209   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:21.692824   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:21.692895   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:21.723211   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:21.723294   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:21.753298   17177 logs.go:276] 0 containers: []
	W0304 04:19:21.753312   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:21.753375   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:21.763745   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:21.763761   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:21.763767   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:21.778110   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:21.778123   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:21.795417   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:21.795429   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:21.813587   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:21.813597   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:21.825004   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:21.825016   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:21.849437   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:21.849449   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:21.853902   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:21.853911   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:21.865300   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:21.865311   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:21.879385   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:21.879396   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:21.917043   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:21.917056   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:21.939391   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:21.939401   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:21.956663   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:21.956673   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:21.967453   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:21.967465   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:21.979988   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:21.979999   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:21.991983   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:21.991994   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:22.004205   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:22.004219   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:21.685212   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:21.685245   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:24.544797   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:26.685542   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:26.685625   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:29.547190   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:29.547670   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:29.587646   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:29.587778   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:29.610008   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:29.610129   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:29.624818   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:29.624897   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:29.638452   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:29.638520   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:29.649462   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:29.649524   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:29.660566   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:29.660632   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:29.670460   17177 logs.go:276] 0 containers: []
	W0304 04:19:29.670471   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:29.670528   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:29.681027   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:29.681042   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:29.681048   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:29.715755   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:29.715769   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:29.737586   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:29.737598   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:29.751786   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:29.751796   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:29.769201   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:29.769212   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:29.775637   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:29.775649   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:29.794049   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:29.794093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:29.806341   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:29.806352   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:29.828985   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:29.828992   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:29.868679   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:29.868693   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:29.882914   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:29.882927   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:29.894732   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:29.894744   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:29.911122   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:29.911132   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:29.923218   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:29.923230   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:29.937683   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:29.937698   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:29.948782   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:29.948792   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:32.462486   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:31.686263   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:31.686347   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:37.465153   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:37.465569   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:37.500350   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:37.500486   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:37.520807   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:37.520950   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:37.535276   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:37.535350   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:37.547635   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:37.547708   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:37.557879   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:37.557946   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:37.572652   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:37.572716   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:37.583407   17177 logs.go:276] 0 containers: []
	W0304 04:19:37.583424   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:37.583486   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:37.593838   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:37.593856   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:37.593862   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:37.629461   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:37.629472   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:37.643630   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:37.643639   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:37.656080   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:37.656092   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:37.678919   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:37.678928   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:37.690915   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:37.690929   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:37.695309   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:37.695317   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:37.714363   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:37.714374   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:37.735618   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:37.735630   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:37.747597   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:37.747610   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:37.761645   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:37.761657   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:37.801226   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:37.801236   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:37.812996   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:37.813010   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:37.830209   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:37.830222   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:37.846720   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:37.846732   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:37.858282   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:37.858293   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:36.687385   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:36.687471   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:40.374355   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:41.689077   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:41.689161   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:45.376738   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:45.377191   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:45.415656   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:45.415798   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:45.436555   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:45.436652   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:45.451614   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:45.451689   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:45.464599   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:45.464675   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:45.479605   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:45.479705   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:45.490564   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:45.490635   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:45.501386   17177 logs.go:276] 0 containers: []
	W0304 04:19:45.501397   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:45.501456   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:45.512019   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:45.512039   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:45.512045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:45.516617   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:45.516627   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:45.535204   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:45.535216   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:45.552850   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:45.552861   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:45.570949   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:45.570965   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:45.588238   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:45.588248   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:45.600107   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:45.600121   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:45.637923   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:45.637931   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:45.649594   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:45.649606   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:45.661359   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:45.661371   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:45.675474   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:45.675484   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:45.689594   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:45.689604   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:45.703366   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:45.703375   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:45.715296   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:45.715309   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:45.739309   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:45.739326   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:45.775428   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:45.775440   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:48.289619   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:46.690615   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:46.690685   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:53.291907   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:53.292038   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:19:53.306326   17177 logs.go:276] 2 containers: [5ce7c31b6cbb 440178f351a5]
	I0304 04:19:53.306409   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:19:53.318866   17177 logs.go:276] 2 containers: [9f1c1a879f7e b74f2799ab14]
	I0304 04:19:53.318939   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:19:53.330029   17177 logs.go:276] 1 containers: [9e28c57f2a84]
	I0304 04:19:53.330106   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:19:53.340637   17177 logs.go:276] 2 containers: [7ca9d2da82e9 e9016c04a2c2]
	I0304 04:19:53.340708   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:19:53.351246   17177 logs.go:276] 1 containers: [08cd543c4e26]
	I0304 04:19:53.351312   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:19:53.361638   17177 logs.go:276] 2 containers: [053a16c423eb 42712b1ea980]
	I0304 04:19:53.361706   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:19:53.377471   17177 logs.go:276] 0 containers: []
	W0304 04:19:53.377484   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:19:53.377545   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:19:53.387639   17177 logs.go:276] 1 containers: [15cece66282a]
	I0304 04:19:53.387659   17177 logs.go:123] Gathering logs for kube-apiserver [5ce7c31b6cbb] ...
	I0304 04:19:53.387664   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ce7c31b6cbb"
	I0304 04:19:53.402340   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:19:53.402350   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:19:53.426266   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:19:53.426280   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:19:53.461563   17177 logs.go:123] Gathering logs for kube-scheduler [7ca9d2da82e9] ...
	I0304 04:19:53.461576   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ca9d2da82e9"
	I0304 04:19:53.473456   17177 logs.go:123] Gathering logs for kube-proxy [08cd543c4e26] ...
	I0304 04:19:53.473468   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08cd543c4e26"
	I0304 04:19:53.485636   17177 logs.go:123] Gathering logs for kube-controller-manager [053a16c423eb] ...
	I0304 04:19:53.485650   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 053a16c423eb"
	I0304 04:19:53.502504   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:19:53.502517   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:19:53.517918   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:19:53.517929   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:19:53.557742   17177 logs.go:123] Gathering logs for kube-apiserver [440178f351a5] ...
	I0304 04:19:53.557755   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440178f351a5"
	I0304 04:19:53.577009   17177 logs.go:123] Gathering logs for kube-controller-manager [42712b1ea980] ...
	I0304 04:19:53.577019   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42712b1ea980"
	I0304 04:19:53.588514   17177 logs.go:123] Gathering logs for storage-provisioner [15cece66282a] ...
	I0304 04:19:53.588527   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15cece66282a"
	I0304 04:19:53.600042   17177 logs.go:123] Gathering logs for kube-scheduler [e9016c04a2c2] ...
	I0304 04:19:53.600055   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9016c04a2c2"
	I0304 04:19:53.616159   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:19:53.616173   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:19:53.620643   17177 logs.go:123] Gathering logs for etcd [9f1c1a879f7e] ...
	I0304 04:19:53.620660   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f1c1a879f7e"
	I0304 04:19:53.635978   17177 logs.go:123] Gathering logs for etcd [b74f2799ab14] ...
	I0304 04:19:53.635989   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f2799ab14"
	I0304 04:19:53.653957   17177 logs.go:123] Gathering logs for coredns [9e28c57f2a84] ...
	I0304 04:19:53.653970   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e28c57f2a84"
	I0304 04:19:51.692742   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:51.692829   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:56.169160   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:56.695189   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:56.695310   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:01.171520   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:01.171681   17177 kubeadm.go:640] restartCluster took 4m4.26874775s
	W0304 04:20:01.171828   17177 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0304 04:20:01.171879   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0304 04:20:02.198529   17177 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026644083s)
	I0304 04:20:02.198594   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:20:02.203547   17177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:20:02.206565   17177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:20:02.209274   17177 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0304 04:20:02.209289   17177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0304 04:20:02.227194   17177 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0304 04:20:02.227225   17177 kubeadm.go:322] [preflight] Running pre-flight checks
	I0304 04:20:02.285936   17177 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0304 04:20:02.285994   17177 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0304 04:20:02.286048   17177 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0304 04:20:02.335505   17177 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0304 04:20:02.343595   17177 out.go:204]   - Generating certificates and keys ...
	I0304 04:20:02.343630   17177 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0304 04:20:02.343672   17177 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0304 04:20:02.343710   17177 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0304 04:20:02.343740   17177 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0304 04:20:02.343791   17177 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0304 04:20:02.343817   17177 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0304 04:20:02.343853   17177 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0304 04:20:02.343886   17177 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0304 04:20:02.343925   17177 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0304 04:20:02.343964   17177 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0304 04:20:02.343984   17177 kubeadm.go:322] [certs] Using the existing "sa" key
	I0304 04:20:02.344013   17177 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0304 04:20:02.477940   17177 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0304 04:20:02.628673   17177 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0304 04:20:02.770217   17177 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0304 04:20:03.008133   17177 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0304 04:20:03.038211   17177 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0304 04:20:03.038265   17177 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0304 04:20:03.038287   17177 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0304 04:20:03.122823   17177 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0304 04:20:03.128862   17177 out.go:204]   - Booting up control plane ...
	I0304 04:20:03.128910   17177 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0304 04:20:03.128944   17177 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0304 04:20:03.128986   17177 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0304 04:20:03.129021   17177 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0304 04:20:03.129093   17177 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0304 04:20:01.697823   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:01.697845   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:07.630303   17177 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502751 seconds
	I0304 04:20:07.630439   17177 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0304 04:20:07.635499   17177 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0304 04:20:08.144593   17177 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0304 04:20:08.144741   17177 kubeadm.go:322] [mark-control-plane] Marking the node running-upgrade-156000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0304 04:20:08.650069   17177 kubeadm.go:322] [bootstrap-token] Using token: 7z17se.iszmbsipe7dpw0nb
	I0304 04:20:08.658964   17177 out.go:204]   - Configuring RBAC rules ...
	I0304 04:20:08.659055   17177 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0304 04:20:08.659113   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0304 04:20:08.663959   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0304 04:20:08.664858   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0304 04:20:08.665864   17177 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0304 04:20:08.667154   17177 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0304 04:20:08.670847   17177 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0304 04:20:08.847107   17177 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0304 04:20:09.057318   17177 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0304 04:20:09.057827   17177 kubeadm.go:322] 
	I0304 04:20:09.057866   17177 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0304 04:20:09.057871   17177 kubeadm.go:322] 
	I0304 04:20:09.057915   17177 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0304 04:20:09.057920   17177 kubeadm.go:322] 
	I0304 04:20:09.057932   17177 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0304 04:20:09.057961   17177 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0304 04:20:09.057986   17177 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0304 04:20:09.057989   17177 kubeadm.go:322] 
	I0304 04:20:09.058017   17177 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0304 04:20:09.058021   17177 kubeadm.go:322] 
	I0304 04:20:09.058049   17177 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0304 04:20:09.058052   17177 kubeadm.go:322] 
	I0304 04:20:09.058079   17177 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0304 04:20:09.058118   17177 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0304 04:20:09.058163   17177 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0304 04:20:09.058168   17177 kubeadm.go:322] 
	I0304 04:20:09.058209   17177 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0304 04:20:09.058245   17177 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0304 04:20:09.058250   17177 kubeadm.go:322] 
	I0304 04:20:09.058293   17177 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7z17se.iszmbsipe7dpw0nb \
	I0304 04:20:09.058342   17177 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef \
	I0304 04:20:09.058352   17177 kubeadm.go:322] 	--control-plane 
	I0304 04:20:09.058357   17177 kubeadm.go:322] 
	I0304 04:20:09.058407   17177 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0304 04:20:09.058411   17177 kubeadm.go:322] 
	I0304 04:20:09.058451   17177 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7z17se.iszmbsipe7dpw0nb \
	I0304 04:20:09.058503   17177 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef 
	I0304 04:20:09.058647   17177 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0304 04:20:09.058656   17177 cni.go:84] Creating CNI manager for ""
	I0304 04:20:09.058663   17177 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:20:09.066574   17177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0304 04:20:09.070600   17177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0304 04:20:09.074523   17177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0304 04:20:09.079991   17177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0304 04:20:09.080042   17177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:20:09.080099   17177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab57ba9f65fd4cb3ac8815e4f9baeeca5604e645 minikube.k8s.io/name=running-upgrade-156000 minikube.k8s.io/updated_at=2024_03_04T04_20_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:20:09.116659   17177 kubeadm.go:1088] duration metric: took 36.659416ms to wait for elevateKubeSystemPrivileges.
	I0304 04:20:09.116665   17177 ops.go:34] apiserver oom_adj: -16
	I0304 04:20:09.130813   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:20:09.131783   17177 main.go:141] libmachine: Using SSH client type: external
	I0304 04:20:09.131798   17177 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa (-rw-------)
	I0304 04:20:09.131813   17177 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa -p 52560] /usr/bin/ssh <nil>}
	I0304 04:20:09.131826   17177 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa -p 52560 -f -NTL 52592:localhost:8443
	I0304 04:20:09.168060   17177 kubeadm.go:406] StartCluster complete in 4m12.317517542s
	I0304 04:20:09.168107   17177 settings.go:142] acquiring lock: {Name:mk5ed2e5b4fa3bf37e56838441d7d3c0b1b72b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:20:09.168272   17177 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:20:09.168831   17177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:20:09.169098   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0304 04:20:09.169172   17177 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0304 04:20:09.169229   17177 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-156000"
	I0304 04:20:09.169246   17177 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-156000"
	W0304 04:20:09.169248   17177 addons.go:243] addon storage-provisioner should already be in state true
	I0304 04:20:09.169252   17177 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-156000"
	I0304 04:20:09.169267   17177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-156000"
	I0304 04:20:09.169281   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:20:09.169339   17177 config.go:182] Loaded profile config "running-upgrade-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:20:09.169438   17177 kapi.go:59] client config for running-upgrade-156000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038e77d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:20:09.170362   17177 kapi.go:59] client config for running-upgrade-156000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/running-upgrade-156000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1038e77d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:20:09.170460   17177 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-156000"
	W0304 04:20:09.170465   17177 addons.go:243] addon default-storageclass should already be in state true
	I0304 04:20:09.170473   17177 host.go:66] Checking if "running-upgrade-156000" exists ...
	I0304 04:20:09.174451   17177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:20:06.700036   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:06.700280   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:06.724426   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:06.724597   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:06.742155   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:06.742248   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:06.754704   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:06.754775   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:06.765876   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:06.765959   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:06.775970   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:06.776041   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:06.786123   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:06.786202   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:06.796059   17343 logs.go:276] 0 containers: []
	W0304 04:20:06.796075   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:06.796132   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:06.807337   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:06.807355   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:06.807361   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:06.819177   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:06.819188   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:06.834727   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:06.834740   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:06.846610   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:06.846623   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:06.870272   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:06.870282   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:06.885098   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:06.885108   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:06.889192   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:06.889200   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:06.903252   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:06.903262   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:06.945597   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:06.945610   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:06.961200   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:06.961217   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:06.972497   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:06.972511   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:06.986770   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:06.986780   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:07.099948   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:07.099962   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:07.115275   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:07.115287   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:07.132515   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:07.132531   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:07.153557   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:07.153569   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:07.166416   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:07.166429   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:09.178562   17177 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:20:09.178570   17177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0304 04:20:09.178591   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:20:09.179454   17177 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0304 04:20:09.179461   17177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0304 04:20:09.179466   17177 sshutil.go:53] new ssh client: &{IP:localhost Port:52560 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/running-upgrade-156000/id_rsa Username:docker}
	I0304 04:20:09.199928   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0304 04:20:09.216921   17177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0304 04:20:09.250548   17177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:20:09.596804   17177 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
	I0304 04:20:09.685060   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:14.685486   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:14.685713   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:14.710613   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:14.710727   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:14.726975   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:14.727067   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:14.739844   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:14.739908   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:14.751244   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:14.751328   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:14.761555   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:14.761623   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:14.772432   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:14.772501   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:14.782974   17343 logs.go:276] 0 containers: []
	W0304 04:20:14.782986   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:14.783037   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:14.794285   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:14.794308   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:14.794315   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:14.830449   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:14.830460   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:14.845216   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:14.845229   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:14.859539   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:14.859550   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:14.864378   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:14.864385   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:14.878104   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:14.878114   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:14.895126   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:14.895135   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:14.912583   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:14.912594   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:14.926662   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:14.926677   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:14.938768   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:14.938778   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:14.953414   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:14.953420   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:14.991109   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:14.991120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:15.008097   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:15.008107   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:15.025700   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:15.025711   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:15.037067   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:15.037077   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:15.048217   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:15.048228   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:15.064311   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:15.064322   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:17.590257   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:22.592868   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:22.593033   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:22.604337   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:22.604412   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:22.615165   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:22.615238   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:22.625435   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:22.625511   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:22.636267   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:22.636352   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:22.646695   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:22.646767   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:22.657639   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:22.657708   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:22.668411   17343 logs.go:276] 0 containers: []
	W0304 04:20:22.668481   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:22.668559   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:22.679428   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:22.679445   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:22.679450   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:22.683999   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:22.684006   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:22.723605   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:22.723616   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:22.740702   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:22.740713   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:22.752595   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:22.752618   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:22.777808   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:22.777819   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:22.790553   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:22.790573   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:22.809522   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:22.809544   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:22.861492   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:22.861507   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:22.881676   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:22.881687   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:22.897697   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:22.897711   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:22.912824   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:22.912840   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:22.929513   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:22.929528   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:22.942750   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:22.942762   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:22.955111   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:22.955123   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:22.967640   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:22.967653   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:22.992035   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:22.992048   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:25.511299   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:30.513536   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:30.513696   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:30.526100   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:30.526186   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:30.541971   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:30.542069   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:30.551884   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:30.551954   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:30.562914   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:30.562991   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:30.572898   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:30.572961   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:30.586394   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:30.586460   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:30.595945   17343 logs.go:276] 0 containers: []
	W0304 04:20:30.595957   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:30.596022   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:30.609931   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:30.609947   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:30.609953   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:30.626780   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:30.626789   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:30.665614   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:30.665631   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:30.688725   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:30.688735   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:30.703836   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:30.703848   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:30.721755   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:30.721774   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:30.733924   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:30.733934   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:30.767825   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:30.767837   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:30.779964   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:30.779979   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:30.792115   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:30.792124   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:30.807752   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:30.807765   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:30.822499   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:30.822511   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:30.840764   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:30.840776   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:30.859359   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:30.859378   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:30.874254   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:30.874270   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:30.891262   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:30.891273   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:30.916491   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:30.916500   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:33.422918   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:20:39.171810   17177 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "running-upgrade-156000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0304 04:20:39.171823   17177 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0304 04:20:39.171834   17177 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:20:39.176151   17177 out.go:177] * Verifying Kubernetes components...
	I0304 04:20:38.425299   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:38.425780   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:38.455390   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:38.455547   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:38.473159   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:38.473253   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:38.486315   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:38.486391   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:38.498271   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:38.498345   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:38.508447   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:38.508526   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:38.522690   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:38.522769   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:38.532716   17343 logs.go:276] 0 containers: []
	W0304 04:20:38.532726   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:38.532780   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:38.548246   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:38.548271   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:38.548277   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:38.564419   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:38.564428   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:38.586806   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:38.586817   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:38.598345   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:38.598356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:38.615855   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:38.615864   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:38.630638   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:38.630649   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:38.634870   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:38.634878   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:38.670459   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:38.670470   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:38.686062   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:38.686071   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:38.725108   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:38.725121   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:38.739758   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:38.739771   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:38.750633   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:38.750644   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:38.764928   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:38.764938   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:38.776414   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:38.776425   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:38.793587   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:38.793600   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:38.804733   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:38.804752   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:38.816254   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:38.816267   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:39.182171   17177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:20:39.187313   17177 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:20:39.187358   17177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:20:39.191826   17177 api_server.go:72] duration metric: took 19.9805ms to wait for apiserver process to appear ...
	I0304 04:20:39.191834   17177 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:20:39.191841   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:20:39.623578   17177 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0304 04:20:39.627947   17177 out.go:177] * Enabled addons: storage-provisioner
	I0304 04:20:39.641850   17177 addons.go:505] enable addons completed in 30.472883084s: enabled=[storage-provisioner]
	I0304 04:20:41.342492   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:44.193918   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:44.193979   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:46.344856   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:46.344983   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:46.359515   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:46.359594   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:46.374788   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:46.374854   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:46.385995   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:46.386070   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:46.396230   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:46.396310   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:46.406741   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:46.406799   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:46.417131   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:46.417186   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:46.430932   17343 logs.go:276] 0 containers: []
	W0304 04:20:46.430944   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:46.431001   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:46.441294   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:46.441310   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:46.441316   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:46.476751   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:46.476764   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:46.494388   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:46.494400   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:46.531440   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:46.531453   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:46.548908   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:46.548921   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:46.565459   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:46.565471   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:46.583313   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:46.583324   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:46.599108   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:46.599115   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:46.616244   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:46.616255   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:46.627414   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:46.627425   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:46.639074   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:46.639083   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:46.658559   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:46.658570   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:46.678397   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:46.678408   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:46.695511   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:46.695522   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:46.718824   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:46.718834   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:46.732743   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:46.732756   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:46.747383   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:46.747395   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:49.252529   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:49.194402   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:49.194436   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:54.254702   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:54.254866   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:54.275506   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:54.275613   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:54.194847   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:54.194891   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:54.290048   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:54.290124   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:54.310353   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:54.310428   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:54.320973   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:54.321044   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:54.331345   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:54.331412   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:54.341959   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:54.342036   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:54.352005   17343 logs.go:276] 0 containers: []
	W0304 04:20:54.352014   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:54.352065   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:54.362341   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:54.362359   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:54.362365   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:54.406261   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:54.406272   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:54.421026   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:54.421037   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:54.432359   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:54.432372   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:54.449906   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:54.449920   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:54.470179   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:54.470193   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:54.493198   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:54.493205   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:54.504750   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:54.504763   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:54.519573   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:54.519580   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:54.555107   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:54.555120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:54.570991   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:54.571002   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:54.582550   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:54.582562   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:54.593784   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:54.593794   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:54.598187   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:54.598193   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:54.613111   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:54.613120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:54.629854   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:54.629864   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:54.648377   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:54.648390   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:57.162368   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:59.195865   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:59.195905   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:02.164724   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:02.165080   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:02.197307   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:02.197434   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:02.214103   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:02.214183   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:02.229999   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:02.230070   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:02.241725   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:02.241797   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:02.252346   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:02.252412   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:02.265855   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:02.265923   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:02.278055   17343 logs.go:276] 0 containers: []
	W0304 04:21:02.278066   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:02.278123   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:02.289921   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:02.289942   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:02.289948   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:02.305306   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:02.305317   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:02.316407   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:02.316418   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:02.353108   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:02.353119   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:02.367440   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:02.367451   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:02.381191   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:02.381203   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:02.395823   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:02.395836   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:02.415315   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:02.415327   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:02.426801   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:02.426812   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:02.452256   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:02.452266   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:02.467204   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:02.467213   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:02.471493   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:02.471502   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:02.490918   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:02.490928   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:02.502258   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:02.502269   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:02.518327   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:02.518338   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:02.554169   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:02.554180   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:02.570205   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:02.570216   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:04.196816   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:04.196857   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:05.082968   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:09.198234   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:09.198276   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:10.084725   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:10.084948   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:10.108024   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:10.108127   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:10.123728   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:10.123817   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:10.136587   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:10.136654   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:10.148357   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:10.148444   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:10.159183   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:10.159252   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:10.169863   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:10.169939   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:10.180289   17343 logs.go:276] 0 containers: []
	W0304 04:21:10.180300   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:10.180359   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:10.190878   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:10.190894   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:10.190900   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:10.202758   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:10.202769   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:10.225981   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:10.225988   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:10.241103   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:10.241114   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:10.252344   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:10.252356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:10.269385   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:10.269395   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:10.281458   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:10.281468   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:10.296407   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:10.296413   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:10.310874   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:10.310888   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:10.349128   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:10.349144   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:10.363874   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:10.363885   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:10.381993   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:10.382009   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:10.393245   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:10.393256   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:10.409110   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:10.409121   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:10.413920   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:10.413929   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:10.448899   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:10.448910   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:10.464101   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:10.464112   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:12.987045   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:14.199642   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:14.199685   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:17.989417   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:17.989631   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:18.010767   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:18.010881   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:18.024801   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:18.024879   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:18.037018   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:18.037097   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:18.047836   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:18.047906   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:18.061756   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:18.061821   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:18.072307   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:18.072374   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:18.082167   17343 logs.go:276] 0 containers: []
	W0304 04:21:18.082179   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:18.082249   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:18.093471   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:18.093487   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:18.093492   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:18.107875   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:18.107889   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:18.124337   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:18.124348   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:18.136335   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:18.136346   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:18.153629   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:18.153639   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:18.164786   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:18.164797   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:18.180634   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:18.180644   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:18.215949   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:18.215964   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:18.230612   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:18.230623   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:18.269766   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:18.269778   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:18.281304   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:18.281316   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:18.303966   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:18.303974   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:18.307877   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:18.307883   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:18.319028   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:18.319043   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:18.333547   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:18.333557   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:18.344773   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:18.344782   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:18.361494   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:18.361510   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:19.201407   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:19.201428   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:20.881488   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:24.203549   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:24.203572   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:25.883517   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:25.883747   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:25.905915   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:25.906024   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:25.919531   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:25.919610   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:25.931633   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:25.931695   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:25.946022   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:25.946105   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:25.959211   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:25.959285   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:25.970168   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:25.970238   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:25.980688   17343 logs.go:276] 0 containers: []
	W0304 04:21:25.980701   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:25.980756   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:25.997517   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:25.997534   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:25.997540   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:26.011650   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:26.011665   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:26.055201   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:26.055212   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:26.067104   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:26.067115   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:26.082214   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:26.082224   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:26.097882   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:26.097890   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:26.111829   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:26.111840   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:26.123691   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:26.123703   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:26.140738   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:26.140748   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:26.153073   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:26.153087   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:26.164863   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:26.164873   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:26.188751   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:26.188759   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:26.203699   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:26.203709   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:26.224072   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:26.224081   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:26.236278   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:26.236288   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:26.240838   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:26.240847   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:26.274561   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:26.274571   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:28.791279   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:29.205747   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:29.205789   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:33.793544   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:33.793709   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:33.813393   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:33.813501   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:33.827880   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:33.827954   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:33.842333   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:33.842401   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:33.853508   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:33.853591   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:33.864399   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:33.864468   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:33.875238   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:33.875305   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:33.885927   17343 logs.go:276] 0 containers: []
	W0304 04:21:33.885939   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:33.885996   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:33.896548   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:33.896564   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:33.896570   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:33.913364   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:33.913377   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:33.928665   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:33.928676   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:33.933212   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:33.933223   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:33.972527   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:33.972539   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:33.987399   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:33.987410   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:33.998526   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:33.998536   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:34.034144   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:34.034155   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:34.046106   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:34.046120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:34.058291   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:34.058303   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:34.071202   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:34.071212   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:34.086195   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:34.086206   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:34.111723   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:34.111735   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:34.123657   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:34.123670   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:34.138660   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:34.138668   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:34.152588   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:34.152598   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:34.170966   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:34.170976   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:34.208019   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:34.208056   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:36.697388   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:39.210291   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:39.210458   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:39.230659   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:21:39.230761   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:39.245443   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:21:39.245525   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:39.257376   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:21:39.257451   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:39.268026   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:21:39.268109   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:39.278094   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:21:39.278158   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:39.288615   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:21:39.288680   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:39.298559   17177 logs.go:276] 0 containers: []
	W0304 04:21:39.298570   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:39.298641   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:39.308701   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:21:39.308718   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:21:39.308723   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:21:39.320375   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:21:39.320388   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:21:39.335532   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:21:39.335541   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:21:39.346751   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:39.346763   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:39.386624   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:39.386635   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:39.390883   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:21:39.390891   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:21:39.404936   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:21:39.404947   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:21:39.421516   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:21:39.421526   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:21:39.433522   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:21:39.433531   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:21:39.444971   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:39.444984   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:39.469341   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:39.469351   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:39.509114   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:21:39.509126   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:21:39.526459   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:21:39.526469   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:42.040123   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:41.699636   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:41.699855   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:41.731911   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:41.732003   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:41.747310   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:41.747388   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:41.759563   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:41.759635   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:41.770138   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:41.770210   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:41.780204   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:41.780283   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:41.790891   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:41.790967   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:41.809830   17343 logs.go:276] 0 containers: []
	W0304 04:21:41.809842   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:41.809900   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:41.820305   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:41.820325   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:41.820331   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:41.855578   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:41.855590   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:41.867332   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:41.867347   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:41.878862   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:41.878875   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:41.883497   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:41.883504   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:41.920560   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:41.920571   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:41.938449   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:41.938464   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:41.953944   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:41.953954   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:41.965531   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:41.965540   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:41.988266   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:41.988281   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:42.003278   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:42.003290   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:42.025693   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:42.025704   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:42.040008   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:42.040021   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:42.050858   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:42.050871   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:42.070011   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:42.070025   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:42.088279   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:42.088290   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:42.103198   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:42.103208   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:47.042330   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:47.042472   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:47.060024   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:21:47.060163   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:47.074427   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:21:47.074501   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:47.085964   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:21:47.086030   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:47.095983   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:21:47.096051   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:47.109303   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:21:47.109366   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:47.119785   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:21:47.119856   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:47.130304   17177 logs.go:276] 0 containers: []
	W0304 04:21:47.130319   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:47.130393   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:47.141117   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:21:47.141134   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:47.141139   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:47.179650   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:47.179659   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:47.218712   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:21:47.218723   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:21:47.233074   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:21:47.233085   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:21:47.252406   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:21:47.252417   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:21:47.265061   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:21:47.265073   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:47.276512   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:47.276522   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:47.299932   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:47.299940   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:47.304692   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:21:47.304702   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:21:47.316662   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:21:47.316670   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:21:47.329433   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:21:47.329442   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:21:47.343736   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:21:47.343746   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:21:47.355687   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:21:47.355698   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:21:44.617407   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:49.875334   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:49.620073   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:49.620273   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:49.646570   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:49.646700   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:49.663445   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:49.663527   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:49.679726   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:49.679801   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:49.691790   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:49.691865   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:49.703770   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:49.703841   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:49.719942   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:49.720015   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:49.734343   17343 logs.go:276] 0 containers: []
	W0304 04:21:49.734355   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:49.734416   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:49.745175   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:49.745192   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:49.745197   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:49.762328   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:49.762341   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:49.776478   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:49.776491   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:49.797783   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:49.797798   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:49.812842   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:49.812853   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:49.823718   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:49.823732   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:49.828361   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:49.828367   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:49.862586   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:49.862601   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:49.899702   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:49.899711   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:49.923964   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:49.923975   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:49.935666   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:49.935676   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:49.949182   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:49.949192   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:49.960557   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:49.960566   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:49.982999   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:49.983008   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:49.994618   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:49.994630   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:50.009209   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:50.009215   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:50.023537   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:50.023548   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:52.543170   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:54.877505   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:54.877692   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:54.888976   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:21:54.889063   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:54.904573   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:21:54.904635   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:54.915513   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:21:54.915590   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:54.929354   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:21:54.929424   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:54.940291   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:21:54.940367   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:54.952551   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:21:54.952631   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:54.963654   17177 logs.go:276] 0 containers: []
	W0304 04:21:54.963665   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:54.963724   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:54.978988   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:21:54.979002   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:21:54.979009   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:21:54.990357   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:54.990368   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:54.994560   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:54.994567   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:55.030121   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:21:55.030134   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:21:55.044999   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:21:55.045011   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:21:55.060009   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:21:55.060020   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:21:55.072208   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:21:55.072221   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:21:55.086815   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:21:55.086825   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:21:55.098887   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:55.098898   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:55.137178   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:21:55.137187   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:21:55.149541   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:21:55.149550   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:21:55.167766   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:55.167776   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:55.192221   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:21:55.192227   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:57.706522   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:57.545910   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:57.546217   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:57.580374   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:57.580506   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:57.598884   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:57.598975   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:57.615837   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:57.615914   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:57.627486   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:57.627558   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:57.637739   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:57.637811   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:57.648631   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:57.648703   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:57.658964   17343 logs.go:276] 0 containers: []
	W0304 04:21:57.658977   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:57.659034   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:57.669655   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:57.669672   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:57.669678   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:57.685001   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:57.685012   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:57.696943   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:57.696957   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:57.708618   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:57.708628   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:57.724432   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:57.724446   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:57.741878   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:57.741895   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:57.754010   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:57.754023   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:57.772134   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:57.772147   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:57.788497   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:57.788508   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:57.825168   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:57.825180   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:57.862511   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:57.862522   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:57.878412   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:57.878425   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:57.893063   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:57.893073   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:57.908993   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:57.909007   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:57.920508   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:57.920520   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:57.944008   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:57.944017   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:57.948451   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:57.948458   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:02.708766   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:02.709163   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:02.740751   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:02.740899   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:02.760709   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:02.760808   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:02.775606   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:02.775685   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:02.789137   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:02.789214   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:02.800083   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:02.800156   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:02.813836   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:02.813906   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:02.824163   17177 logs.go:276] 0 containers: []
	W0304 04:22:02.824173   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:02.824228   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:02.835011   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:02.835025   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:02.835030   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:02.852286   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:02.852297   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:02.863966   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:02.863976   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:02.869121   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:02.869131   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:02.905102   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:02.905113   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:02.927147   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:02.927159   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:02.939313   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:02.939326   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:02.955770   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:02.955782   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:02.967590   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:02.967598   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:02.990969   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:02.990977   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:03.002698   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:03.002711   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:03.040448   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:03.040456   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:03.062085   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:03.062097   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:00.464743   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:05.577898   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:05.467160   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:05.467343   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:05.485252   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:05.485339   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:05.498636   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:05.498705   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:05.510143   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:05.510212   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:05.522069   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:05.522134   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:05.532277   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:05.532338   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:05.542884   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:05.542953   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:05.553260   17343 logs.go:276] 0 containers: []
	W0304 04:22:05.553276   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:05.553339   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:05.571337   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:05.571356   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:05.571364   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:05.575976   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:05.575985   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:05.611025   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:05.611036   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:05.624668   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:05.624679   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:05.638909   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:05.638921   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:05.661297   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:05.661307   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:05.703332   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:05.703343   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:05.718385   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:05.718397   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:05.731008   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:05.731019   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:05.742479   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:05.742491   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:05.759241   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:05.759252   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:05.774269   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:05.774282   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:05.785926   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:05.785938   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:05.801159   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:05.801172   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:05.813323   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:05.813335   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:05.827880   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:05.827888   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:05.841344   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:05.841356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:08.358735   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:10.580141   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:10.580554   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:10.612808   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:10.612964   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:10.631809   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:10.631906   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:10.645692   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:10.645769   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:10.657843   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:10.657916   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:10.668538   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:10.668614   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:10.679216   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:10.679283   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:10.689386   17177 logs.go:276] 0 containers: []
	W0304 04:22:10.689408   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:10.689473   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:10.700002   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:10.700019   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:10.700025   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:10.723867   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:10.723877   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:10.735228   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:10.735239   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:10.739942   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:10.739951   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:10.775456   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:10.775466   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:10.794125   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:10.794135   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:10.806856   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:10.806870   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:10.819145   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:10.819156   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:10.831492   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:10.831503   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:10.871879   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:10.871896   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:10.886347   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:10.886357   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:10.898648   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:10.898659   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:10.913219   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:10.913229   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:13.434131   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:13.361120   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:13.361444   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:13.389767   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:13.389885   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:13.409139   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:13.409217   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:13.422641   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:13.422716   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:13.434518   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:13.434574   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:13.445300   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:13.445363   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:13.455872   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:13.455944   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:13.466172   17343 logs.go:276] 0 containers: []
	W0304 04:22:13.466182   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:13.466241   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:13.481826   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:13.481844   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:13.481850   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:13.517237   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:13.517248   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:13.531691   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:13.531703   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:13.569927   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:13.569938   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:13.592496   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:13.592503   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:13.596872   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:13.596878   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:13.612819   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:13.612829   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:13.625114   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:13.625122   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:13.642471   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:13.642481   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:13.658229   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:13.658241   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:13.669942   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:13.669956   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:13.684603   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:13.684610   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:13.699843   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:13.699857   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:13.711560   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:13.711571   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:13.727779   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:13.727791   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:13.747226   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:13.747237   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:13.758277   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:13.758287   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:18.436337   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:18.436595   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:18.456767   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:18.456900   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:18.471918   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:18.471993   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:18.484279   17177 logs.go:276] 2 containers: [bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:18.484344   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:18.494938   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:18.495009   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:18.505361   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:18.505437   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:18.516061   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:18.516135   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:18.525893   17177 logs.go:276] 0 containers: []
	W0304 04:22:18.525902   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:18.525971   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:18.536746   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:18.536764   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:18.536770   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:18.576816   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:18.576825   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:18.581240   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:18.581247   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:18.615667   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:18.615681   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:18.634087   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:18.634099   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:18.648848   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:18.648857   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:18.661001   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:18.661013   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:18.674817   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:18.674828   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:18.687695   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:18.687709   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:18.699763   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:18.699777   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:18.717233   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:18.717243   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:18.728695   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:18.728705   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:18.753134   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:18.753141   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:16.278356   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:21.268020   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:21.280588   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:21.280704   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:21.292164   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:21.292236   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:21.302325   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:21.302394   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:21.312925   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:21.312985   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:21.324213   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:21.324286   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:21.342097   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:21.342169   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:21.352119   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:21.352193   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:21.362685   17343 logs.go:276] 0 containers: []
	W0304 04:22:21.362697   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:21.362761   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:21.374062   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:21.374079   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:21.374085   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:21.388932   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:21.388940   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:21.402899   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:21.402910   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:21.417179   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:21.417189   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:21.433830   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:21.433840   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:21.449051   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:21.449060   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:21.460406   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:21.460418   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:21.499803   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:21.499816   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:21.539517   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:21.539527   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:21.551527   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:21.551538   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:21.571209   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:21.571219   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:21.582569   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:21.582579   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:21.586800   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:21.586806   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:21.597876   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:21.597888   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:21.614916   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:21.614927   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:21.626799   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:21.626809   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:21.640517   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:21.640527   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:24.166176   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:26.270284   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:26.270464   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:26.290230   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:26.290327   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:26.305323   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:26.305406   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:26.317695   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:26.317771   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:26.327855   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:26.327926   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:26.338692   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:26.338760   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:26.348799   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:26.348869   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:26.358747   17177 logs.go:276] 0 containers: []
	W0304 04:22:26.358758   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:26.358821   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:26.369124   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:26.369138   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:26.369143   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:26.383755   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:26.383767   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:26.395625   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:26.395637   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:26.410220   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:26.410231   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:26.422221   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:26.422232   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:26.461954   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:26.461968   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:26.473407   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:26.473417   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:26.485205   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:26.485218   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:26.501008   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:26.501017   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:26.518090   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:26.518105   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:26.529439   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:26.529449   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:26.547405   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:26.547415   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:26.552552   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:26.552561   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:26.593123   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:26.593134   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:26.616367   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:26.616373   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:29.129841   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:29.168449   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:29.168543   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:29.201845   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:29.201931   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:29.214375   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:29.214455   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:29.226758   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:29.226827   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:29.237924   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:29.237991   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:29.256724   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:29.256793   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:29.267134   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:29.267197   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:29.277295   17343 logs.go:276] 0 containers: []
	W0304 04:22:29.277313   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:29.277366   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:34.132157   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:34.132335   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:29.288312   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:29.288330   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:29.288335   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:29.306046   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:29.306057   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:29.317271   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:29.317283   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:29.340049   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:29.340056   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:29.355359   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:29.355365   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:29.369936   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:29.369946   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:29.416539   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:29.416550   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:29.431161   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:29.431171   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:29.442757   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:29.442772   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:29.457804   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:29.457814   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:29.473083   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:29.473094   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:29.477261   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:29.477266   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:29.494730   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:29.494742   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:29.506419   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:29.506429   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:29.542935   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:29.542945   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:29.557198   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:29.557209   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:29.571588   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:29.571597   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:32.085214   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:34.144208   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:34.144282   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:34.154934   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:34.155000   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:34.167195   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:34.167273   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:34.177419   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:34.177485   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:34.187733   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:34.187802   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:34.198416   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:34.198484   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:34.208848   17177 logs.go:276] 0 containers: []
	W0304 04:22:34.208869   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:34.208936   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:34.219365   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:34.219381   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:34.219387   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:34.297924   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:34.297938   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:34.315023   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:34.315037   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:34.320138   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:34.320146   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:34.332392   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:34.332401   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:34.344675   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:34.344687   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:34.358772   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:34.358785   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:34.372576   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:34.372593   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:34.385427   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:34.385441   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:34.400797   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:34.400806   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:34.412956   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:34.412970   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:34.424985   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:34.424996   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:34.463819   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:34.463831   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:34.474896   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:34.474910   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:34.486113   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:34.486122   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:37.011128   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:37.085964   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:37.086108   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:37.099945   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:37.100022   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:37.110738   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:37.110806   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:37.121156   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:37.121231   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:37.131833   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:37.131915   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:37.142101   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:37.142169   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:37.153015   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:37.153092   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:37.163201   17343 logs.go:276] 0 containers: []
	W0304 04:22:37.163213   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:37.163276   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:37.174197   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:37.174214   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:37.174219   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:37.196536   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:37.196544   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:37.211589   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:37.211597   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:37.226022   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:37.226033   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:37.268945   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:37.268958   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:37.283471   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:37.283485   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:37.302011   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:37.302021   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:37.313546   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:37.313558   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:37.328493   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:37.328504   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:37.340907   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:37.340919   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:37.356584   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:37.356596   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:37.367682   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:37.367692   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:37.372425   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:37.372433   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:37.389627   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:37.389638   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:37.401123   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:37.401134   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:37.437807   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:37.437823   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:37.454768   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:37.454780   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:42.013551   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:42.013851   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:42.043988   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:42.044119   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:42.062848   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:42.062945   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:42.089820   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:42.089894   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:42.100966   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:42.101034   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:42.113747   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:42.113817   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:42.124280   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:42.124353   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:42.134903   17177 logs.go:276] 0 containers: []
	W0304 04:22:42.134914   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:42.134990   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:42.150571   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:42.150589   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:42.150594   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:42.165712   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:42.165725   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:42.181590   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:42.181602   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:42.207011   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:42.207020   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:42.247532   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:42.247542   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:42.252550   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:42.252559   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:42.266933   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:42.266944   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:42.284313   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:42.284324   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:42.295812   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:42.295824   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:42.335304   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:42.335315   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:42.347339   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:42.347349   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:42.362493   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:42.362504   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:42.373869   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:42.373880   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:42.386526   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:42.386536   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:42.401150   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:42.401159   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:39.975315   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:44.915564   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:44.977846   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:44.978017   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:45.005024   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:45.005126   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:45.018339   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:45.018424   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:45.030125   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:45.030194   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:45.040777   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:45.040849   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:45.051315   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:45.051385   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:45.063675   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:45.063748   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:45.073398   17343 logs.go:276] 0 containers: []
	W0304 04:22:45.073410   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:45.073470   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:45.084285   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:45.084303   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:45.084308   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:45.100814   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:45.100824   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:45.118896   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:45.118906   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:45.154728   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:45.154744   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:45.170080   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:45.170094   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:45.182219   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:45.182230   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:45.196711   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:45.196721   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:45.210662   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:45.210678   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:45.232958   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:45.232969   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:45.244934   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:45.244946   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:45.260610   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:45.260621   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:45.265038   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:45.265044   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:45.283282   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:45.283294   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:45.322527   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:45.322548   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:45.342834   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:45.342846   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:45.354262   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:45.354274   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:45.369801   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:45.369813   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:47.885564   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:49.918078   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:49.918342   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:49.945604   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:49.945710   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:49.963552   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:49.963645   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:49.978183   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:49.978247   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:49.989400   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:49.989460   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:49.999197   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:49.999268   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:50.009379   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:50.009438   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:50.019494   17177 logs.go:276] 0 containers: []
	W0304 04:22:50.019506   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:50.019566   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:50.030015   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:50.030031   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:50.030036   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:50.068884   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:50.068900   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:50.092220   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:50.092229   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:50.106538   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:50.106552   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:50.119280   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:50.119294   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:50.131197   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:50.131211   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:50.145203   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:50.145216   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:50.185946   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:50.185957   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:50.199600   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:50.199612   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:50.211539   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:50.211548   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:50.223104   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:50.223118   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:50.234706   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:50.234720   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:50.246692   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:50.246702   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:50.258832   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:50.258846   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:50.274020   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:50.274031   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:52.792790   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:52.888293   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:52.888621   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:52.929114   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:52.929243   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:52.946545   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:52.946628   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:52.959259   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:52.959334   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:52.970192   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:52.970262   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:52.980805   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:52.980878   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:52.991216   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:52.991290   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:53.001721   17343 logs.go:276] 0 containers: []
	W0304 04:22:53.001733   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:53.001791   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:53.012095   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:53.012111   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:53.012117   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:53.023553   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:53.023565   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:53.059339   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:53.059351   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:53.073696   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:53.073706   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:53.089014   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:53.089027   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:53.129956   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:53.129967   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:53.159322   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:53.159332   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:53.180483   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:53.180493   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:53.195524   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:53.195535   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:53.207340   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:53.207352   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:53.218750   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:53.218760   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:53.230070   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:53.230082   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:53.245169   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:53.245176   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:53.249066   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:53.249072   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:53.270076   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:53.270084   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:53.283079   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:53.283089   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:53.324820   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:53.324844   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:57.795659   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:57.796033   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:57.836536   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:22:57.836649   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:57.852888   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:22:57.852978   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:57.865797   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:22:57.865876   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:57.877365   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:22:57.877425   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:57.887991   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:22:57.888059   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:57.899428   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:22:57.899491   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:57.910218   17177 logs.go:276] 0 containers: []
	W0304 04:22:57.910227   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:57.910279   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:57.923329   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:22:57.923347   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:22:57.923352   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:22:57.942103   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:22:57.942114   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:22:57.953919   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:57.953931   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:57.991316   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:22:57.991324   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:22:58.011583   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:22:58.011595   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:22:58.023892   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:22:58.023905   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:22:58.042375   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:58.042384   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:58.047120   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:22:58.047129   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:22:58.062278   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:58.062293   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:58.085839   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:22:58.085862   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:22:58.097380   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:22:58.097391   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:22:58.113007   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:22:58.113018   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:22:58.126937   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:22:58.126947   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:22:58.139187   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:22:58.139202   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:58.151216   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:58.151227   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:55.851542   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:00.711109   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:00.852136   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:00.852355   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:00.882159   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:23:00.882277   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:00.900399   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:23:00.900490   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:00.913901   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:23:00.913974   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:00.925299   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:23:00.925372   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:00.936669   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:23:00.936736   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:00.947878   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:23:00.947948   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:00.958223   17343 logs.go:276] 0 containers: []
	W0304 04:23:00.958235   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:00.958301   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:00.968514   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:23:00.968533   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:23:00.968540   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:23:00.982321   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:23:00.982334   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:23:00.998762   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:23:00.998771   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:23:01.013449   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:23:01.013461   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:23:01.025155   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:23:01.025166   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:23:01.040512   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:23:01.040523   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:23:01.055631   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:23:01.055641   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:23:01.068670   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:23:01.068684   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:23:01.084340   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:23:01.084353   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:23:01.096009   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:01.096021   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:01.100579   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:01.100588   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:01.123493   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:01.123503   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:01.161073   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:23:01.161086   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:23:01.198435   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:23:01.198445   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:23:01.215956   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:23:01.215965   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:23:01.254062   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:23:01.254074   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:01.273281   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:01.273296   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:03.789882   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:05.712812   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:05.713014   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:05.729631   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:05.729706   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:05.742447   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:05.742512   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:05.754187   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:05.754271   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:05.765076   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:05.765148   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:05.777963   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:05.778031   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:05.788770   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:05.788842   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:05.799508   17177 logs.go:276] 0 containers: []
	W0304 04:23:05.799523   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:05.799580   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:05.810003   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:05.810019   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:05.810025   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:05.854096   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:05.854107   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:05.866137   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:05.866152   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:05.881129   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:05.881141   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:05.898515   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:05.898525   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:05.923867   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:05.923875   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:05.936165   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:05.936175   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:05.947777   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:05.947788   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:05.985693   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:05.985700   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:05.997835   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:05.997847   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:06.013554   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:06.013567   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:06.024824   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:06.024838   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:06.029183   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:06.029188   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:06.050689   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:06.050700   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:06.064776   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:06.064788   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:08.578032   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:08.792353   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:08.792525   17343 kubeadm.go:640] restartCluster took 4m3.866255458s
	W0304 04:23:08.792657   17343 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0304 04:23:08.792723   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0304 04:23:09.822886   17343 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030149208s)
	I0304 04:23:09.822949   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:23:09.828673   17343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:23:09.831549   17343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:23:09.834545   17343 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0304 04:23:09.834559   17343 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0304 04:23:09.853566   17343 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0304 04:23:09.853642   17343 kubeadm.go:322] [preflight] Running pre-flight checks
	I0304 04:23:09.906393   17343 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0304 04:23:09.906443   17343 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0304 04:23:09.906488   17343 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0304 04:23:09.957297   17343 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0304 04:23:09.965452   17343 out.go:204]   - Generating certificates and keys ...
	I0304 04:23:09.965485   17343 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0304 04:23:09.965514   17343 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0304 04:23:09.965553   17343 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0304 04:23:09.965585   17343 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0304 04:23:09.965625   17343 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0304 04:23:09.965660   17343 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0304 04:23:09.965697   17343 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0304 04:23:09.965730   17343 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0304 04:23:09.965808   17343 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0304 04:23:09.965865   17343 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0304 04:23:09.965887   17343 kubeadm.go:322] [certs] Using the existing "sa" key
	I0304 04:23:09.965920   17343 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0304 04:23:10.019764   17343 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0304 04:23:10.265216   17343 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0304 04:23:10.428973   17343 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0304 04:23:10.496883   17343 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0304 04:23:10.529316   17343 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0304 04:23:10.529755   17343 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0304 04:23:10.529794   17343 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0304 04:23:10.602116   17343 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0304 04:23:13.580285   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:13.580396   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:13.592526   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:13.592599   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:13.604077   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:13.604152   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:13.615470   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:13.615543   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:13.626013   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:13.626090   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:13.640801   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:13.640877   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:13.653030   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:13.653103   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:13.664902   17177 logs.go:276] 0 containers: []
	W0304 04:23:13.664915   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:13.664977   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:13.678017   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:13.678036   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:13.678043   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:13.682790   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:13.682800   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:13.708566   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:13.708576   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:13.723993   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:13.724003   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:13.741182   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:13.741190   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:13.777305   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:13.777319   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:13.792358   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:13.792369   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:13.806679   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:13.806688   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:13.818946   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:13.818957   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:13.830597   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:13.830607   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:13.842787   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:13.842797   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:13.880421   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:13.880429   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:13.899956   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:13.899967   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:13.911857   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:13.911866   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:13.923677   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:13.923691   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:10.606374   17343 out.go:204]   - Booting up control plane ...
	I0304 04:23:10.606424   17343 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0304 04:23:10.606485   17343 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0304 04:23:10.606530   17343 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0304 04:23:10.606579   17343 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0304 04:23:10.606690   17343 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0304 04:23:15.105076   17343 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502226 seconds
	I0304 04:23:15.105139   17343 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0304 04:23:15.109461   17343 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0304 04:23:15.616924   17343 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0304 04:23:15.617034   17343 kubeadm.go:322] [mark-control-plane] Marking the node stopped-upgrade-289000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0304 04:23:16.122739   17343 kubeadm.go:322] [bootstrap-token] Using token: javfic.twzpj02lkxs7rthh
	I0304 04:23:16.126926   17343 out.go:204]   - Configuring RBAC rules ...
	I0304 04:23:16.126991   17343 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0304 04:23:16.135625   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0304 04:23:16.138589   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0304 04:23:16.139779   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0304 04:23:16.140852   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0304 04:23:16.141987   17343 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0304 04:23:16.145929   17343 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0304 04:23:16.316781   17343 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0304 04:23:16.538919   17343 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0304 04:23:16.539604   17343 kubeadm.go:322] 
	I0304 04:23:16.539637   17343 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0304 04:23:16.539641   17343 kubeadm.go:322] 
	I0304 04:23:16.539689   17343 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0304 04:23:16.539694   17343 kubeadm.go:322] 
	I0304 04:23:16.539712   17343 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0304 04:23:16.539753   17343 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0304 04:23:16.539788   17343 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0304 04:23:16.539791   17343 kubeadm.go:322] 
	I0304 04:23:16.539818   17343 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0304 04:23:16.539822   17343 kubeadm.go:322] 
	I0304 04:23:16.539850   17343 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0304 04:23:16.539854   17343 kubeadm.go:322] 
	I0304 04:23:16.539879   17343 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0304 04:23:16.539918   17343 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0304 04:23:16.539962   17343 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0304 04:23:16.539965   17343 kubeadm.go:322] 
	I0304 04:23:16.540013   17343 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0304 04:23:16.540053   17343 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0304 04:23:16.540057   17343 kubeadm.go:322] 
	I0304 04:23:16.540112   17343 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token javfic.twzpj02lkxs7rthh \
	I0304 04:23:16.540166   17343 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef \
	I0304 04:23:16.540184   17343 kubeadm.go:322] 	--control-plane 
	I0304 04:23:16.540187   17343 kubeadm.go:322] 
	I0304 04:23:16.540230   17343 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0304 04:23:16.540233   17343 kubeadm.go:322] 
	I0304 04:23:16.540279   17343 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token javfic.twzpj02lkxs7rthh \
	I0304 04:23:16.540330   17343 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef 
	I0304 04:23:16.540436   17343 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0304 04:23:16.540501   17343 cni.go:84] Creating CNI manager for ""
	I0304 04:23:16.540510   17343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:23:16.543338   17343 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0304 04:23:16.551294   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0304 04:23:16.554432   17343 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0304 04:23:16.559121   17343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0304 04:23:16.559164   17343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:23:16.559180   17343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab57ba9f65fd4cb3ac8815e4f9baeeca5604e645 minikube.k8s.io/name=stopped-upgrade-289000 minikube.k8s.io/updated_at=2024_03_04T04_23_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:23:16.602469   17343 kubeadm.go:1088] duration metric: took 43.340458ms to wait for elevateKubeSystemPrivileges.
	I0304 04:23:16.602477   17343 ops.go:34] apiserver oom_adj: -16
	I0304 04:23:16.602492   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:23:16.603205   17343 main.go:141] libmachine: Using SSH client type: external
	I0304 04:23:16.603223   17343 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa (-rw-------)
	I0304 04:23:16.603239   17343 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757] /usr/bin/ssh <nil>}
	I0304 04:23:16.603250   17343 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757 -f -NTL 52792:localhost:8443
	I0304 04:23:16.647800   17343 kubeadm.go:406] StartCluster complete in 4m11.780622708s
	I0304 04:23:16.647851   17343 settings.go:142] acquiring lock: {Name:mk5ed2e5b4fa3bf37e56838441d7d3c0b1b72b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:23:16.647948   17343 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:23:16.648527   17343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:23:16.648729   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0304 04:23:16.648814   17343 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0304 04:23:16.648860   17343 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:23:16.648869   17343 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-289000"
	I0304 04:23:16.648880   17343 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-289000"
	W0304 04:23:16.648883   17343 addons.go:243] addon storage-provisioner should already be in state true
	I0304 04:23:16.648914   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:23:16.648923   17343 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-289000"
	I0304 04:23:16.648929   17343 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-289000"
	I0304 04:23:16.649048   17343 kapi.go:59] client config for stopped-upgrade-289000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10254f7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:23:16.649988   17343 kapi.go:59] client config for stopped-upgrade-289000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10254f7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:23:16.650100   17343 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-289000"
	W0304 04:23:16.650105   17343 addons.go:243] addon default-storageclass should already be in state true
	I0304 04:23:16.650112   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:23:16.654265   17343 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:23:16.438219   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:16.658114   17343 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:23:16.658121   17343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0304 04:23:16.658130   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:23:16.658770   17343 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0304 04:23:16.658777   17343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0304 04:23:16.658782   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:23:16.680448   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0304 04:23:16.719210   17343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0304 04:23:16.731348   17343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:23:17.143699   17343 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
	I0304 04:23:21.440271   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:21.440532   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:21.462153   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:21.462264   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:21.477127   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:21.477221   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:21.490202   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:21.490276   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:21.500857   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:21.500918   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:21.511204   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:21.511275   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:21.528334   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:21.528404   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:21.539802   17177 logs.go:276] 0 containers: []
	W0304 04:23:21.539815   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:21.539874   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:21.553523   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:21.553540   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:21.553545   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:21.565552   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:21.565561   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:21.583208   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:21.583219   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:21.594555   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:21.594569   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:21.599056   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:21.599065   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:21.618119   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:21.618127   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:21.629857   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:21.629871   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:21.641210   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:21.641221   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:21.655748   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:21.655757   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:21.675824   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:21.675836   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:21.699456   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:21.699466   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:21.738245   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:21.738253   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:21.774489   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:21.774503   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:21.786205   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:21.786215   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:21.798573   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:21.798585   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:24.315611   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:29.317966   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:29.318116   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:29.329210   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:29.329279   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:29.340320   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:29.340398   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:29.350798   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:29.350873   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:29.361041   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:29.361115   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:29.371895   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:29.371963   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:29.382844   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:29.382916   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:29.397715   17177 logs.go:276] 0 containers: []
	W0304 04:23:29.397728   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:29.397786   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:29.413021   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:29.413037   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:29.413043   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:29.428125   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:29.428138   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:29.446114   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:29.446125   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:29.471322   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:29.471337   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:29.486310   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:29.486323   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:29.498140   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:29.498180   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:29.510116   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:29.510126   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:29.522631   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:29.522644   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:29.534292   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:29.534303   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:29.573400   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:29.573410   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:29.610825   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:29.610841   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:29.625317   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:29.625328   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:29.637782   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:29.637796   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:29.650355   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:29.650371   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:29.655174   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:29.655192   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:32.169713   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:37.171984   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:37.172205   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:37.193603   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:37.193703   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:37.209337   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:37.209416   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:37.222412   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:37.222480   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:37.233570   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:37.233641   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:37.244207   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:37.244273   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:37.261333   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:37.261396   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:37.281590   17177 logs.go:276] 0 containers: []
	W0304 04:23:37.281602   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:37.281661   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:37.294559   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:37.294576   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:37.294581   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:37.331111   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:37.331123   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:37.346984   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:37.346997   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:37.358904   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:37.358919   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:37.370575   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:37.370587   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:37.391769   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:37.391779   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:37.430352   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:37.430360   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:37.443796   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:37.443805   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:37.461355   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:37.461365   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:37.479183   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:37.479193   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:37.490995   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:37.491007   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:37.515305   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:37.515312   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:37.526966   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:37.526975   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:37.532209   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:37.532217   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:37.546470   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:37.546483   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:40.060531   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:23:46.651266   17343 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "stopped-upgrade-289000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0304 04:23:46.651283   17343 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0304 04:23:46.651294   17343 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:23:46.655537   17343 out.go:177] * Verifying Kubernetes components...
	I0304 04:23:46.662461   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:23:46.668205   17343 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:23:46.668251   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:23:46.672859   17343 api_server.go:72] duration metric: took 21.551541ms to wait for apiserver process to appear ...
	I0304 04:23:46.672867   17343 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:23:46.672877   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:23:47.145823   17343 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0304 04:23:47.150323   17343 out.go:177] * Enabled addons: storage-provisioner
	I0304 04:23:45.063179   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:45.063453   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:45.088149   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:45.088268   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:45.104003   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:45.104090   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:45.119544   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:45.119616   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:45.130355   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:45.130428   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:45.140343   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:45.140410   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:45.151145   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:45.151205   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:45.161438   17177 logs.go:276] 0 containers: []
	W0304 04:23:45.161450   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:45.161517   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:45.177916   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:45.177932   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:45.177938   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:45.218363   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:45.218371   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:45.230012   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:45.230025   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:45.244181   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:45.244192   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:45.255877   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:45.255887   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:45.280098   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:45.280106   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:45.291724   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:45.291733   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:45.296597   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:45.296607   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:45.333017   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:45.333028   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:45.347863   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:45.347875   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:45.366253   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:45.366264   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:45.378143   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:45.378152   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:45.396286   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:45.396300   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:45.410496   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:45.410505   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:45.422342   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:45.422355   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:47.943379   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:47.157149   17343 addons.go:505] enable addons completed in 30.50853325s: enabled=[storage-provisioner]
	I0304 04:23:52.945917   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:52.946065   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:52.961627   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:23:52.961710   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:52.974181   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:23:52.974261   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:52.985724   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:23:52.985789   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:52.996509   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:23:52.996584   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:53.006686   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:23:53.006749   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:53.016859   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:23:53.016936   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:53.027043   17177 logs.go:276] 0 containers: []
	W0304 04:23:53.027054   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:53.027111   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:53.038299   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:23:53.038317   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:23:53.038322   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:23:53.052569   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:23:53.052581   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:23:53.064883   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:23:53.064895   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:23:53.080099   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:23:53.080110   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:23:53.098294   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:53.098307   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:53.133453   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:23:53.133467   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:23:53.144983   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:23:53.144998   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:23:53.156329   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:53.156341   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:53.181054   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:23:53.181065   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:23:53.193467   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:23:53.193481   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:23:53.207695   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:23:53.207707   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:23:53.219564   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:53.219578   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:53.258495   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:23:53.258504   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:23:53.270356   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:23:53.270366   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:53.281848   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:53.281862   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:51.674983   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:51.675020   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:55.788492   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:56.675318   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:56.675345   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:00.790565   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:00.790734   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:00.804876   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:00.804959   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:00.816468   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:00.816535   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:00.827140   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:24:00.827210   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:00.837223   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:00.837287   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:00.848641   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:00.848714   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:00.859495   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:00.859563   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:00.869338   17177 logs.go:276] 0 containers: []
	W0304 04:24:00.869349   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:00.869409   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:00.879465   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:00.879479   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:00.879484   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:00.918168   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:24:00.918177   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:24:00.930614   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:00.930628   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:00.942991   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:00.943003   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:00.957676   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:00.957690   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:00.975248   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:00.975259   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:00.987196   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:00.987209   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:01.010037   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:01.010045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:01.021725   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:01.021734   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:01.057571   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:01.057583   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:01.072764   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:24:01.072775   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:24:01.085686   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:01.085696   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:01.097402   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:01.097413   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:01.101725   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:01.101732   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:01.115374   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:01.115383   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:03.632264   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:01.675661   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:01.675726   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:08.634865   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:08.635257   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:08.674424   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:08.674519   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:08.691457   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:08.691530   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:08.703516   17177 logs.go:276] 4 containers: [ac9fa5422c38 127d7b70714a bb28ce9fb69b d7ccee857a9c]
	I0304 04:24:08.703591   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:08.714578   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:08.714655   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:08.725121   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:08.725198   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:08.736089   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:08.736153   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:08.746501   17177 logs.go:276] 0 containers: []
	W0304 04:24:08.746512   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:08.746573   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:08.756502   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:08.756518   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:08.756523   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:08.770696   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:08.770706   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:08.782528   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:08.782538   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:08.821082   17177 logs.go:123] Gathering logs for coredns [bb28ce9fb69b] ...
	I0304 04:24:08.821093   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb28ce9fb69b"
	I0304 04:24:08.832849   17177 logs.go:123] Gathering logs for coredns [d7ccee857a9c] ...
	I0304 04:24:08.832860   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7ccee857a9c"
	I0304 04:24:08.852113   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:08.852123   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:08.863569   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:08.863580   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:08.876257   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:08.876267   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:08.888408   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:08.888419   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:08.904388   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:08.904399   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:08.919267   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:08.919279   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:08.947366   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:08.947376   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:08.982308   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:08.982318   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:08.997908   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:08.997916   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:09.021582   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:09.021593   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:06.676589   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:06.676655   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:11.527875   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:11.677416   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:11.677475   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:16.530107   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:16.530209   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:16.541266   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:16.541339   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:16.551450   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:16.551513   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:16.562340   17177 logs.go:276] 4 containers: [0d8f3b5bfecb 705f11ca41c7 ac9fa5422c38 127d7b70714a]
	I0304 04:24:16.562420   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:16.572772   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:16.572836   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:16.583214   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:16.583280   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:16.594192   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:16.594248   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:16.604355   17177 logs.go:276] 0 containers: []
	W0304 04:24:16.604369   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:16.604422   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:16.614778   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:16.614797   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:16.614801   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:16.629093   17177 logs.go:123] Gathering logs for coredns [0d8f3b5bfecb] ...
	I0304 04:24:16.629103   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8f3b5bfecb"
	I0304 04:24:16.640381   17177 logs.go:123] Gathering logs for coredns [705f11ca41c7] ...
	I0304 04:24:16.640392   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705f11ca41c7"
	I0304 04:24:16.652050   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:16.652061   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:16.691443   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:16.691452   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:16.696499   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:16.696505   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:16.709049   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:16.709063   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:16.721496   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:16.721510   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:16.741029   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:16.741045   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:16.757214   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:16.757228   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:16.771060   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:16.771070   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:16.810112   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:16.810124   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:16.824151   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:16.824162   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:16.845819   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:16.845829   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:16.857558   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:16.857570   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:16.678522   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:16.678538   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:19.383106   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:21.680278   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:21.680390   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:24.385357   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:24.385491   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:24.399100   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:24.399176   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:24.409972   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:24.410050   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:24.420723   17177 logs.go:276] 4 containers: [0d8f3b5bfecb 705f11ca41c7 ac9fa5422c38 127d7b70714a]
	I0304 04:24:24.420796   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:24.431930   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:24.431999   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:24.442508   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:24.442571   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:24.454644   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:24.454715   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:24.464633   17177 logs.go:276] 0 containers: []
	W0304 04:24:24.464643   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:24.464699   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:24.474905   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:24.474919   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:24.474932   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:24.496121   17177 logs.go:123] Gathering logs for coredns [0d8f3b5bfecb] ...
	I0304 04:24:24.496132   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8f3b5bfecb"
	I0304 04:24:24.507763   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:24.507776   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:24.519746   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:24.519757   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:24.542660   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:24.542669   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:24.554250   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:24.554261   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:24.590959   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:24.590971   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:24.608315   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:24.608326   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:24.624124   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:24.624136   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:24.638708   17177 logs.go:123] Gathering logs for coredns [705f11ca41c7] ...
	I0304 04:24:24.638721   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705f11ca41c7"
	I0304 04:24:24.650519   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:24.650530   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:24.666108   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:24.666121   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:24.678044   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:24.678054   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:24.682270   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:24.682276   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:24.695313   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:24.695327   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:27.236987   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:26.682394   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:26.682432   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:32.238436   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:32.238702   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:32.264791   17177 logs.go:276] 1 containers: [d612fc92cd59]
	I0304 04:24:32.264916   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:32.281870   17177 logs.go:276] 1 containers: [b8352f802244]
	I0304 04:24:32.281949   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:32.295349   17177 logs.go:276] 4 containers: [0d8f3b5bfecb 705f11ca41c7 ac9fa5422c38 127d7b70714a]
	I0304 04:24:32.295426   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:32.307083   17177 logs.go:276] 1 containers: [c0cee593a1ee]
	I0304 04:24:32.307150   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:32.318177   17177 logs.go:276] 1 containers: [34fd4950fbd5]
	I0304 04:24:32.318252   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:32.329098   17177 logs.go:276] 1 containers: [0e3ba9431b65]
	I0304 04:24:32.329197   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:32.339400   17177 logs.go:276] 0 containers: []
	W0304 04:24:32.339415   17177 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:32.339490   17177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:32.349907   17177 logs.go:276] 1 containers: [1ff2944a74d3]
	I0304 04:24:32.349924   17177 logs.go:123] Gathering logs for coredns [705f11ca41c7] ...
	I0304 04:24:32.349928   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705f11ca41c7"
	I0304 04:24:32.361958   17177 logs.go:123] Gathering logs for kube-scheduler [c0cee593a1ee] ...
	I0304 04:24:32.361971   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cee593a1ee"
	I0304 04:24:32.376896   17177 logs.go:123] Gathering logs for kube-controller-manager [0e3ba9431b65] ...
	I0304 04:24:32.376911   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e3ba9431b65"
	I0304 04:24:32.394835   17177 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:32.394846   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:32.433442   17177 logs.go:123] Gathering logs for coredns [ac9fa5422c38] ...
	I0304 04:24:32.433451   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac9fa5422c38"
	I0304 04:24:32.445279   17177 logs.go:123] Gathering logs for storage-provisioner [1ff2944a74d3] ...
	I0304 04:24:32.445290   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ff2944a74d3"
	I0304 04:24:32.456701   17177 logs.go:123] Gathering logs for container status ...
	I0304 04:24:32.456712   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:32.468277   17177 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:32.468288   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:32.472811   17177 logs.go:123] Gathering logs for coredns [127d7b70714a] ...
	I0304 04:24:32.472821   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 127d7b70714a"
	I0304 04:24:32.484868   17177 logs.go:123] Gathering logs for kube-proxy [34fd4950fbd5] ...
	I0304 04:24:32.484879   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34fd4950fbd5"
	I0304 04:24:32.496099   17177 logs.go:123] Gathering logs for coredns [0d8f3b5bfecb] ...
	I0304 04:24:32.496112   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d8f3b5bfecb"
	I0304 04:24:32.510539   17177 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:32.510551   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:32.532695   17177 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:32.532703   17177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:32.568827   17177 logs.go:123] Gathering logs for kube-apiserver [d612fc92cd59] ...
	I0304 04:24:32.568837   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d612fc92cd59"
	I0304 04:24:32.582866   17177 logs.go:123] Gathering logs for etcd [b8352f802244] ...
	I0304 04:24:32.582876   17177 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8352f802244"
	I0304 04:24:31.684637   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:31.684715   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:35.099128   17177 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:36.687145   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:36.687193   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:40.101554   17177 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:40.106683   17177 out.go:177] 
	W0304 04:24:40.111545   17177 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0304 04:24:40.111575   17177 out.go:239] * 
	W0304 04:24:40.113516   17177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:24:40.119628   17177 out.go:177] 
	I0304 04:24:41.689443   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:41.689466   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:46.691249   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:46.691482   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:46.710896   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:24:46.710996   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:46.725608   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:24:46.725685   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:46.739453   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:24:46.739520   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:46.749983   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:24:46.750050   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:46.760662   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:24:46.760749   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:46.770861   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:24:46.770930   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:46.781035   17343 logs.go:276] 0 containers: []
	W0304 04:24:46.781046   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:46.781103   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:46.791983   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:24:46.791997   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:24:46.792002   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:24:46.807201   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:24:46.807214   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:24:46.825596   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:24:46.825607   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:24:46.836941   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:46.836954   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:46.841581   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:46.841590   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:46.877127   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:24:46.877138   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:24:46.891955   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:24:46.891966   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:24:46.903977   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:24:46.903987   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:24:46.923606   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:24:46.923618   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:24:46.935926   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:46.935943   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:46.959560   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:24:46.959592   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:46.973255   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:46.973270   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:47.007101   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:24:47.007115   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:24:49.523763   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-04 12:15:11 UTC, ends at Mon 2024-03-04 12:24:56 UTC. --
	Mar 04 12:24:34 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 04 12:24:34 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:34Z" level=error msg="ContainerStats resp: {0x40008d9b00 linux}"
	Mar 04 12:24:34 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:34Z" level=error msg="ContainerStats resp: {0x40008b8140 linux}"
	Mar 04 12:24:35 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:35Z" level=error msg="ContainerStats resp: {0x40008b9800 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x40009943c0 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x4000994d80 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x40007bd080 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x4000995100 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x4000995b40 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x40007bdd40 linux}"
	Mar 04 12:24:36 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:36Z" level=error msg="ContainerStats resp: {0x4000995e80 linux}"
	Mar 04 12:24:39 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 04 12:24:44 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 04 12:24:47 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:47Z" level=error msg="ContainerStats resp: {0x400084e2c0 linux}"
	Mar 04 12:24:47 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:47Z" level=error msg="ContainerStats resp: {0x400084e7c0 linux}"
	Mar 04 12:24:48 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:48Z" level=error msg="ContainerStats resp: {0x400062b0c0 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x400009c540 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x400009d9c0 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x40007bc900 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x40007bcd40 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x40003a3b40 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x4000994640 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=error msg="ContainerStats resp: {0x40007bdd00 linux}"
	Mar 04 12:24:49 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:49Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 04 12:24:54 running-upgrade-156000 cri-dockerd[3095]: time="2024-03-04T12:24:54Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0d8f3b5bfecbb       edaa71f2aee88       45 seconds ago      Running             coredns                   2                   b488d34875069
	705f11ca41c75       edaa71f2aee88       45 seconds ago      Running             coredns                   2                   6a11af9da550d
	ac9fa5422c381       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b488d34875069
	127d7b70714a5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6a11af9da550d
	34fd4950fbd5b       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fe8fd858a7003
	1ff2944a74d34       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   c94c321630254
	b8352f802244c       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   5ee1c97820f0f
	c0cee593a1ee1       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   3e685ea77e91a
	d612fc92cd59c       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   b15aa62f2069f
	0e3ba9431b652       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   a89a936e02878
	
	
	==> coredns [0d8f3b5bfecb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37743 - 10458 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 6.002322404s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:41876->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:55473 - 17307 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 6.002460684s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:42692->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:55518 - 2880 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 4.001381599s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:59016->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:46321 - 23336 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.001211855s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:47360->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:41534 - 47350 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.000703046s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:54684->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:48039 - 40340 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.001100566s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:58335->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:36051 - 63837 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.000374383s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:56834->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:56571 - 62069 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.000708653s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:39253->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:46246 - 50930 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.000295883s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:48850->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:56312 - 22019 "HINFO IN 5898472966510598005.6630356324781493803. udp 57 false 512" - - 0 2.000677693s
	[ERROR] plugin/errors: 2 5898472966510598005.6630356324781493803. HINFO: read udp 10.244.0.3:51162->10.0.2.3:53: i/o timeout
	
	
	==> coredns [127d7b70714a] <==
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:54142 - 43697 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 6.001476244s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:39145->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:45645 - 54960 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.000883038s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:60413->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:48629 - 45534 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 6.002372335s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:38814->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:39690 - 47893 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.001116478s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:35493->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:41586 - 27954 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.000790249s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:40641->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:52563 - 31334 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.001108096s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:60473->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:47294 - 45588 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.000679232s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:40959->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:48936 - 54218 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.00053628s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:36872->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:46122 - 32818 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.000496282s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:49270->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:44863 - 44654 "HINFO IN 9137520483588749900.3562002678980267042. udp 57 false 512" - - 0 2.000725445s
	[ERROR] plugin/errors: 2 9137520483588749900.3562002678980267042. HINFO: read udp 10.244.0.2:53627->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [705f11ca41c7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:59364 - 15241 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 6.003359734s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:42581->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:51390 - 42770 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 6.001967477s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:44378->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:40763 - 21092 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 4.000545936s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:40276->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:46877 - 17007 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.000846439s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:59472->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:53418 - 33274 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.000519297s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:58043->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:35914 - 34102 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.000457778s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:42731->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:56335 - 65356 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.000243216s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:47093->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:35047 - 61920 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.000723861s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:60952->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:53409 - 14423 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.001130338s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:38336->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:43084 - 41482 "HINFO IN 3947666887188300032.3613437389298074030. udp 57 false 512" - - 0 2.0008994s
	[ERROR] plugin/errors: 2 3947666887188300032.3613437389298074030. HINFO: read udp 10.244.0.2:57510->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ac9fa5422c38] <==
	[INFO] plugin/reload: Running configuration MD5 = 15ba990df895ecddba8ce0ceabdc0ab8
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35548 - 55107 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 6.001276037s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:38628->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:59063 - 38902 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 6.002065378s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:45485->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:35760 - 53657 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 4.002369812s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:39694->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:55101 - 61774 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.00091877s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:33630->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:60741 - 8072 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.001346913s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:36406->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:45728 - 33249 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.000191974s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:46425->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:45979 - 43195 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.000286651s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:56867->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:36076 - 46058 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.000309947s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:56170->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:51978 - 6824 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.000762239s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:48136->10.0.2.3:53: i/o timeout
	[INFO] 127.0.0.1:58393 - 19962 "HINFO IN 1632789727683231193.7574983872383565410. udp 57 false 512" - - 0 2.000695737s
	[ERROR] plugin/errors: 2 1632789727683231193.7574983872383565410. HINFO: read udp 10.244.0.3:36630->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-156000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-156000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab57ba9f65fd4cb3ac8815e4f9baeeca5604e645
	                    minikube.k8s.io/name=running-upgrade-156000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_04T04_20_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 04 Mar 2024 12:20:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-156000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 04 Mar 2024 12:24:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 04 Mar 2024 12:20:09 +0000   Mon, 04 Mar 2024 12:20:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 04 Mar 2024 12:20:09 +0000   Mon, 04 Mar 2024 12:20:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 04 Mar 2024 12:20:09 +0000   Mon, 04 Mar 2024 12:20:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 04 Mar 2024 12:20:09 +0000   Mon, 04 Mar 2024 12:20:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-156000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 4670a6f4f0024c35996c54e61a694117
	  System UUID:                4670a6f4f0024c35996c54e61a694117
	  Boot ID:                    4c042e7e-3bac-4367-983d-f16409ccc5e1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6qs2x                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 coredns-6d4b75cb6d-gkrnw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 etcd-running-upgrade-156000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-running-upgrade-156000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-running-upgrade-156000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-tqfrw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-running-upgrade-156000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m53s (x5 over 4m53s)  kubelet          Node running-upgrade-156000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s (x4 over 4m53s)  kubelet          Node running-upgrade-156000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s (x4 over 4m53s)  kubelet          Node running-upgrade-156000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s                  kubelet          Node running-upgrade-156000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                  kubelet          Node running-upgrade-156000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                  kubelet          Node running-upgrade-156000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m47s                  kubelet          Node running-upgrade-156000 status is now: NodeReady
	  Normal  RegisteredNode           4m35s                  node-controller  Node running-upgrade-156000 event: Registered Node running-upgrade-156000 in Controller
	
	
	==> dmesg <==
	[  +0.071109] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +2.676086] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[  +1.864359] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.075767] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.079331] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.134633] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.090373] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.088612] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +2.289469] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[ +14.679203] systemd-fstab-generator[1970]: Ignoring "noauto" for root device
	[  +2.549253] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.243741] systemd-fstab-generator[2278]: Ignoring "noauto" for root device
	[  +0.077048] systemd-fstab-generator[2289]: Ignoring "noauto" for root device
	[  +0.094875] systemd-fstab-generator[2302]: Ignoring "noauto" for root device
	[  +2.521467] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.218597] systemd-fstab-generator[3049]: Ignoring "noauto" for root device
	[  +0.080311] systemd-fstab-generator[3063]: Ignoring "noauto" for root device
	[  +0.078139] systemd-fstab-generator[3074]: Ignoring "noauto" for root device
	[  +0.075227] systemd-fstab-generator[3088]: Ignoring "noauto" for root device
	[  +2.676860] systemd-fstab-generator[3242]: Ignoring "noauto" for root device
	[  +6.492102] systemd-fstab-generator[3764]: Ignoring "noauto" for root device
	[Mar 4 12:16] kauditd_printk_skb: 68 callbacks suppressed
	[Mar 4 12:20] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.606658] systemd-fstab-generator[11641]: Ignoring "noauto" for root device
	[  +5.646327] systemd-fstab-generator[12240]: Ignoring "noauto" for root device
	
	
	==> etcd [b8352f802244] <==
	{"level":"info","ts":"2024-03-04T12:20:04.751Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-04T12:20:04.751Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-04T12:20:04.751Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-04T12:20:04.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-04T12:20:04.755Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-04T12:20:04.755Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-04T12:20:04.755Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-04T12:20:04.797Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-04T12:20:04.798Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-04T12:20:04.798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-04T12:20:04.798Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-04T12:20:04.798Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-04T12:20:04.803Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-04T12:20:04.803Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-04T12:20:04.803Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-04T12:20:04.798Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-156000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-04T12:20:04.803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-04T12:20:04.803Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:24:56 up 9 min,  0 users,  load average: 0.17, 0.25, 0.18
	Linux running-upgrade-156000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d612fc92cd59] <==
	I0304 12:20:06.147408       1 controller.go:611] quota admission added evaluator for: namespaces
	I0304 12:20:06.180992       1 cache.go:39] Caches are synced for autoregister controller
	I0304 12:20:06.181068       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0304 12:20:06.181097       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0304 12:20:06.181726       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0304 12:20:06.181833       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0304 12:20:06.183803       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0304 12:20:06.912227       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0304 12:20:07.084031       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0304 12:20:07.085223       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0304 12:20:07.085268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0304 12:20:07.228855       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0304 12:20:07.242310       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0304 12:20:07.352074       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0304 12:20:07.354640       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0304 12:20:07.355005       1 controller.go:611] quota admission added evaluator for: endpoints
	I0304 12:20:07.356302       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0304 12:20:08.225354       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0304 12:20:08.877558       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0304 12:20:08.880406       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0304 12:20:08.885970       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0304 12:20:08.933463       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0304 12:20:22.040535       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0304 12:20:22.991003       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0304 12:20:23.703365       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [0e3ba9431b65] <==
	I0304 12:20:21.965670       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0304 12:20:21.987931       1 shared_informer.go:262] Caches are synced for PV protection
	I0304 12:20:21.987961       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0304 12:20:21.987977       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0304 12:20:21.988095       1 shared_informer.go:262] Caches are synced for GC
	I0304 12:20:21.988114       1 shared_informer.go:262] Caches are synced for endpoint
	I0304 12:20:21.988215       1 shared_informer.go:262] Caches are synced for service account
	I0304 12:20:21.988383       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0304 12:20:21.989376       1 shared_informer.go:262] Caches are synced for job
	I0304 12:20:22.038663       1 shared_informer.go:262] Caches are synced for expand
	I0304 12:20:22.041912       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0304 12:20:22.049870       1 shared_informer.go:262] Caches are synced for persistent volume
	I0304 12:20:22.062262       1 shared_informer.go:262] Caches are synced for ephemeral
	I0304 12:20:22.087499       1 shared_informer.go:262] Caches are synced for stateful set
	I0304 12:20:22.087521       1 shared_informer.go:262] Caches are synced for PVC protection
	I0304 12:20:22.094844       1 shared_informer.go:262] Caches are synced for resource quota
	I0304 12:20:22.138979       1 shared_informer.go:262] Caches are synced for daemon sets
	I0304 12:20:22.143091       1 shared_informer.go:262] Caches are synced for resource quota
	I0304 12:20:22.238892       1 shared_informer.go:262] Caches are synced for attach detach
	I0304 12:20:22.610999       1 shared_informer.go:262] Caches are synced for garbage collector
	I0304 12:20:22.665491       1 shared_informer.go:262] Caches are synced for garbage collector
	I0304 12:20:22.665507       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0304 12:20:22.942536       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-6qs2x"
	I0304 12:20:22.950371       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gkrnw"
	I0304 12:20:22.995012       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tqfrw"
	
	
	==> kube-proxy [34fd4950fbd5] <==
	I0304 12:20:23.644732       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0304 12:20:23.644767       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0304 12:20:23.644781       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0304 12:20:23.696294       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0304 12:20:23.696308       1 server_others.go:206] "Using iptables Proxier"
	I0304 12:20:23.696416       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0304 12:20:23.699872       1 server.go:661] "Version info" version="v1.24.1"
	I0304 12:20:23.699879       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0304 12:20:23.700191       1 config.go:317] "Starting service config controller"
	I0304 12:20:23.700204       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0304 12:20:23.700211       1 config.go:226] "Starting endpoint slice config controller"
	I0304 12:20:23.700213       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0304 12:20:23.701964       1 config.go:444] "Starting node config controller"
	I0304 12:20:23.701974       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0304 12:20:23.801570       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0304 12:20:23.801666       1 shared_informer.go:262] Caches are synced for service config
	I0304 12:20:23.802154       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c0cee593a1ee] <==
	W0304 12:20:06.151232       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0304 12:20:06.151257       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0304 12:20:06.151301       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0304 12:20:06.151331       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0304 12:20:06.151366       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0304 12:20:06.151386       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0304 12:20:06.151424       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0304 12:20:06.151449       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0304 12:20:06.151475       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0304 12:20:06.151494       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0304 12:20:06.151544       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0304 12:20:06.151569       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0304 12:20:06.151621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0304 12:20:06.151667       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0304 12:20:06.151702       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0304 12:20:06.151741       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0304 12:20:07.022541       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0304 12:20:07.022588       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0304 12:20:07.147311       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0304 12:20:07.147429       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0304 12:20:07.172500       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0304 12:20:07.172680       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0304 12:20:07.184438       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0304 12:20:07.184528       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0304 12:20:09.745180       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-04 12:15:11 UTC, ends at Mon 2024-03-04 12:24:56 UTC. --
	Mar 04 12:20:10 running-upgrade-156000 kubelet[12246]: E0304 12:20:10.720774   12246 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-156000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-156000"
	Mar 04 12:20:10 running-upgrade-156000 kubelet[12246]: E0304 12:20:10.916143   12246 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-156000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-156000"
	Mar 04 12:20:11 running-upgrade-156000 kubelet[12246]: I0304 12:20:11.111128   12246 request.go:601] Waited for 1.114541392s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 04 12:20:11 running-upgrade-156000 kubelet[12246]: E0304 12:20:11.116368   12246 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-156000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-156000"
	Mar 04 12:20:21 running-upgrade-156000 kubelet[12246]: I0304 12:20:21.941964   12246 topology_manager.go:200] "Topology Admit Handler"
	Mar 04 12:20:21 running-upgrade-156000 kubelet[12246]: I0304 12:20:21.994221   12246 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 04 12:20:21 running-upgrade-156000 kubelet[12246]: I0304 12:20:21.994343   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwm2\" (UniqueName: \"kubernetes.io/projected/823447f0-5717-4a73-a8ea-745c949fdd4d-kube-api-access-hkwm2\") pod \"storage-provisioner\" (UID: \"823447f0-5717-4a73-a8ea-745c949fdd4d\") " pod="kube-system/storage-provisioner"
	Mar 04 12:20:21 running-upgrade-156000 kubelet[12246]: I0304 12:20:21.994358   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/823447f0-5717-4a73-a8ea-745c949fdd4d-tmp\") pod \"storage-provisioner\" (UID: \"823447f0-5717-4a73-a8ea-745c949fdd4d\") " pod="kube-system/storage-provisioner"
	Mar 04 12:20:21 running-upgrade-156000 kubelet[12246]: I0304 12:20:21.994691   12246 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 04 12:20:22 running-upgrade-156000 kubelet[12246]: E0304 12:20:22.097941   12246 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 04 12:20:22 running-upgrade-156000 kubelet[12246]: E0304 12:20:22.097961   12246 projected.go:192] Error preparing data for projected volume kube-api-access-hkwm2 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 04 12:20:22 running-upgrade-156000 kubelet[12246]: E0304 12:20:22.097993   12246 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/823447f0-5717-4a73-a8ea-745c949fdd4d-kube-api-access-hkwm2 podName:823447f0-5717-4a73-a8ea-745c949fdd4d nodeName:}" failed. No retries permitted until 2024-03-04 12:20:22.597980743 +0000 UTC m=+13.729841259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hkwm2" (UniqueName: "kubernetes.io/projected/823447f0-5717-4a73-a8ea-745c949fdd4d-kube-api-access-hkwm2") pod "storage-provisioner" (UID: "823447f0-5717-4a73-a8ea-745c949fdd4d") : configmap "kube-root-ca.crt" not found
	Mar 04 12:20:22 running-upgrade-156000 kubelet[12246]: I0304 12:20:22.943755   12246 topology_manager.go:200] "Topology Admit Handler"
	Mar 04 12:20:22 running-upgrade-156000 kubelet[12246]: I0304 12:20:22.960600   12246 topology_manager.go:200] "Topology Admit Handler"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.000386   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35141d2d-94d1-431b-a3a2-b48f2bb34d61-config-volume\") pod \"coredns-6d4b75cb6d-gkrnw\" (UID: \"35141d2d-94d1-431b-a3a2-b48f2bb34d61\") " pod="kube-system/coredns-6d4b75cb6d-gkrnw"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.000476   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgjk\" (UniqueName: \"kubernetes.io/projected/35141d2d-94d1-431b-a3a2-b48f2bb34d61-kube-api-access-nxgjk\") pod \"coredns-6d4b75cb6d-gkrnw\" (UID: \"35141d2d-94d1-431b-a3a2-b48f2bb34d61\") " pod="kube-system/coredns-6d4b75cb6d-gkrnw"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.000505   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c824b6f-bacf-433e-8f51-3e612c692e10-config-volume\") pod \"coredns-6d4b75cb6d-6qs2x\" (UID: \"7c824b6f-bacf-433e-8f51-3e612c692e10\") " pod="kube-system/coredns-6d4b75cb6d-6qs2x"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.000543   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vngh2\" (UniqueName: \"kubernetes.io/projected/7c824b6f-bacf-433e-8f51-3e612c692e10-kube-api-access-vngh2\") pod \"coredns-6d4b75cb6d-6qs2x\" (UID: \"7c824b6f-bacf-433e-8f51-3e612c692e10\") " pod="kube-system/coredns-6d4b75cb6d-6qs2x"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.005270   12246 topology_manager.go:200] "Topology Admit Handler"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.101431   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/406e1241-79ff-4561-9def-cf680eacc01f-xtables-lock\") pod \"kube-proxy-tqfrw\" (UID: \"406e1241-79ff-4561-9def-cf680eacc01f\") " pod="kube-system/kube-proxy-tqfrw"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.101477   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/406e1241-79ff-4561-9def-cf680eacc01f-lib-modules\") pod \"kube-proxy-tqfrw\" (UID: \"406e1241-79ff-4561-9def-cf680eacc01f\") " pod="kube-system/kube-proxy-tqfrw"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.101500   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/406e1241-79ff-4561-9def-cf680eacc01f-kube-proxy\") pod \"kube-proxy-tqfrw\" (UID: \"406e1241-79ff-4561-9def-cf680eacc01f\") " pod="kube-system/kube-proxy-tqfrw"
	Mar 04 12:20:23 running-upgrade-156000 kubelet[12246]: I0304 12:20:23.101510   12246 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jdv\" (UniqueName: \"kubernetes.io/projected/406e1241-79ff-4561-9def-cf680eacc01f-kube-api-access-d5jdv\") pod \"kube-proxy-tqfrw\" (UID: \"406e1241-79ff-4561-9def-cf680eacc01f\") " pod="kube-system/kube-proxy-tqfrw"
	Mar 04 12:24:11 running-upgrade-156000 kubelet[12246]: I0304 12:24:11.751309   12246 scope.go:110] "RemoveContainer" containerID="d7ccee857a9c2cf9ede1a3fab633526d7abb3d26039f91333a5945d1f61c81e4"
	Mar 04 12:24:11 running-upgrade-156000 kubelet[12246]: I0304 12:24:11.781496   12246 scope.go:110] "RemoveContainer" containerID="bb28ce9fb69b381654486c66a0aad49e47dd03c919671367d41bb381ec400208"
	
	
	==> storage-provisioner [1ff2944a74d3] <==
	I0304 12:20:23.089647       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0304 12:20:23.094731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0304 12:20:23.095031       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0304 12:20:23.098497       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0304 12:20:23.098630       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ce54882-1334-40ae-a323-4aa20d84e2e4", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-156000_fb04e206-68a1-407a-bff4-095cdf543bd7 became leader
	I0304 12:20:23.098656       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-156000_fb04e206-68a1-407a-bff4-095cdf543bd7!
	I0304 12:20:23.200079       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-156000_fb04e206-68a1-407a-bff4-095cdf543bd7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-156000 -n running-upgrade-156000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-156000 -n running-upgrade-156000: exit status 2 (15.578540584s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-156000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-156000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-156000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-156000: (2.209799541s)
--- FAIL: TestRunningBinaryUpgrade (658.52s)
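Many of the qemu2-driver failures in this report trace back to `ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused` during VM creation. A minimal pre-flight sketch for the test host, assuming the default paths minikube uses for the socket_vmnet network (`/var/run/socket_vmnet` for the socket, `/opt/socket_vmnet/bin/socket_vmnet_client` for the client; adjust for your install):

```shell
#!/bin/sh
# Diagnostic sketch: check that the socket_vmnet unix socket and client
# binary exist before running the qemu2-driver tests. Paths below are the
# minikube defaults on macOS and may differ per installation.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"
CLIENT="${CLIENT:-/opt/socket_vmnet/bin/socket_vmnet_client}"

# -S tests for a unix domain socket; a missing socket means the
# socket_vmnet daemon is not running (e.g. its launchd service is unloaded).
if [ -S "$SOCKET" ]; then
  echo "socket present: $SOCKET"
else
  echo "socket missing: $SOCKET (start the socket_vmnet service before the run)"
fi

# -x tests that the client binary minikube shells out to is executable.
if [ -x "$CLIENT" ]; then
  echo "client present: $CLIENT"
else
  echo "client missing: $CLIENT"
fi
```

A "socket missing" result here would explain the repeated `Connection refused` seen in the test output below.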

TestKubernetesUpgrade (15.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-323000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-323000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.14233825s)

-- stdout --
	* [kubernetes-upgrade-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-323000 in cluster kubernetes-upgrade-323000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-323000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:17:16.177752   17267 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:17:16.177881   17267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:17:16.177884   17267 out.go:304] Setting ErrFile to fd 2...
	I0304 04:17:16.177887   17267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:17:16.178026   17267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:17:16.179101   17267 out.go:298] Setting JSON to false
	I0304 04:17:16.195420   17267 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10008,"bootTime":1709544628,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:17:16.195481   17267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:17:16.202271   17267 out.go:177] * [kubernetes-upgrade-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:17:16.209227   17267 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:17:16.213307   17267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:17:16.209291   17267 notify.go:220] Checking for updates...
	I0304 04:17:16.219082   17267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:17:16.222200   17267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:17:16.225224   17267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:17:16.226601   17267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:17:16.230568   17267 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:17:16.230632   17267 config.go:182] Loaded profile config "running-upgrade-156000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:17:16.230686   17267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:17:16.235150   17267 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:17:16.241201   17267 start.go:299] selected driver: qemu2
	I0304 04:17:16.241207   17267 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:17:16.241213   17267 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:17:16.243370   17267 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:17:16.246209   17267 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:17:16.249269   17267 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:17:16.249302   17267 cni.go:84] Creating CNI manager for ""
	I0304 04:17:16.249308   17267 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0304 04:17:16.249318   17267 start_flags.go:323] config:
	{Name:kubernetes-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:17:16.253574   17267 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:17:16.261222   17267 out.go:177] * Starting control plane node kubernetes-upgrade-323000 in cluster kubernetes-upgrade-323000
	I0304 04:17:16.265127   17267 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:17:16.265145   17267 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:17:16.265155   17267 cache.go:56] Caching tarball of preloaded images
	I0304 04:17:16.265208   17267 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:17:16.265213   17267 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0304 04:17:16.265286   17267 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kubernetes-upgrade-323000/config.json ...
	I0304 04:17:16.265297   17267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kubernetes-upgrade-323000/config.json: {Name:mk3f341238cae505c8e0559d830581794e4a13fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:17:16.265507   17267 start.go:365] acquiring machines lock for kubernetes-upgrade-323000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:17:16.265541   17267 start.go:369] acquired machines lock for "kubernetes-upgrade-323000" in 23.25µs
	I0304 04:17:16.265550   17267 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:17:16.265582   17267 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:17:16.273194   17267 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:17:16.295439   17267 start.go:159] libmachine.API.Create for "kubernetes-upgrade-323000" (driver="qemu2")
	I0304 04:17:16.295468   17267 client.go:168] LocalClient.Create starting
	I0304 04:17:16.295560   17267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:17:16.295591   17267 main.go:141] libmachine: Decoding PEM data...
	I0304 04:17:16.295600   17267 main.go:141] libmachine: Parsing certificate...
	I0304 04:17:16.295639   17267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:17:16.295661   17267 main.go:141] libmachine: Decoding PEM data...
	I0304 04:17:16.295668   17267 main.go:141] libmachine: Parsing certificate...
	I0304 04:17:16.295993   17267 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:17:16.672563   17267 main.go:141] libmachine: Creating SSH key...
	I0304 04:17:16.735160   17267 main.go:141] libmachine: Creating Disk image...
	I0304 04:17:16.735166   17267 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:17:16.735371   17267 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:16.759450   17267 main.go:141] libmachine: STDOUT: 
	I0304 04:17:16.759475   17267 main.go:141] libmachine: STDERR: 
	I0304 04:17:16.759537   17267 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2 +20000M
	I0304 04:17:16.770586   17267 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:17:16.770604   17267 main.go:141] libmachine: STDERR: 
	I0304 04:17:16.770625   17267 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:16.770631   17267 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:17:16.770660   17267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b4:c0:65:2f:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:16.772442   17267 main.go:141] libmachine: STDOUT: 
	I0304 04:17:16.772456   17267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:17:16.772475   17267 client.go:171] LocalClient.Create took 477.004959ms
	I0304 04:17:18.774582   17267 start.go:128] duration metric: createHost completed in 2.5089985s
	I0304 04:17:18.774633   17267 start.go:83] releasing machines lock for "kubernetes-upgrade-323000", held for 2.509101958s
	W0304 04:17:18.774662   17267 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:17:18.784705   17267 out.go:177] * Deleting "kubernetes-upgrade-323000" in qemu2 ...
	W0304 04:17:18.806544   17267 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:17:18.806554   17267 start.go:709] Will try again in 5 seconds ...
	I0304 04:17:23.808712   17267 start.go:365] acquiring machines lock for kubernetes-upgrade-323000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:17:23.808923   17267 start.go:369] acquired machines lock for "kubernetes-upgrade-323000" in 136.375µs
	I0304 04:17:23.808970   17267 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:17:23.809053   17267 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:17:23.813163   17267 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:17:23.832203   17267 start.go:159] libmachine.API.Create for "kubernetes-upgrade-323000" (driver="qemu2")
	I0304 04:17:23.832233   17267 client.go:168] LocalClient.Create starting
	I0304 04:17:23.832296   17267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:17:23.832348   17267 main.go:141] libmachine: Decoding PEM data...
	I0304 04:17:23.832355   17267 main.go:141] libmachine: Parsing certificate...
	I0304 04:17:23.832395   17267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:17:23.832421   17267 main.go:141] libmachine: Decoding PEM data...
	I0304 04:17:23.832428   17267 main.go:141] libmachine: Parsing certificate...
	I0304 04:17:23.832737   17267 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:17:23.976986   17267 main.go:141] libmachine: Creating SSH key...
	I0304 04:17:24.221372   17267 main.go:141] libmachine: Creating Disk image...
	I0304 04:17:24.221384   17267 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:17:24.221939   17267 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:24.235808   17267 main.go:141] libmachine: STDOUT: 
	I0304 04:17:24.235835   17267 main.go:141] libmachine: STDERR: 
	I0304 04:17:24.235925   17267 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2 +20000M
	I0304 04:17:24.247339   17267 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:17:24.247416   17267 main.go:141] libmachine: STDERR: 
	I0304 04:17:24.247431   17267 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:24.247437   17267 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:17:24.247464   17267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:03:13:5c:b7:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:24.249314   17267 main.go:141] libmachine: STDOUT: 
	I0304 04:17:24.249356   17267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:17:24.249368   17267 client.go:171] LocalClient.Create took 417.134917ms
	I0304 04:17:26.249925   17267 start.go:128] duration metric: createHost completed in 2.440836s
	I0304 04:17:26.250022   17267 start.go:83] releasing machines lock for "kubernetes-upgrade-323000", held for 2.4411s
	W0304 04:17:26.250412   17267 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:17:26.260829   17267 out.go:177] 
	W0304 04:17:26.264966   17267 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:17:26.264990   17267 out.go:239] * 
	* 
	W0304 04:17:26.266538   17267 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:17:26.275826   17267 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-323000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-323000
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-323000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-323000 status --format={{.Host}}: exit status 7 (34.535084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-323000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-323000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.1734475s)

-- stdout --
	* [kubernetes-upgrade-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-323000 in cluster kubernetes-upgrade-323000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-323000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:17:26.447752   17286 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:17:26.447866   17286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:17:26.447870   17286 out.go:304] Setting ErrFile to fd 2...
	I0304 04:17:26.447872   17286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:17:26.448006   17286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:17:26.448962   17286 out.go:298] Setting JSON to false
	I0304 04:17:26.465242   17286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10018,"bootTime":1709544628,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:17:26.465304   17286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:17:26.469804   17286 out.go:177] * [kubernetes-upgrade-323000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:17:26.472774   17286 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:17:26.476684   17286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:17:26.472868   17286 notify.go:220] Checking for updates...
	I0304 04:17:26.482741   17286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:17:26.485707   17286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:17:26.488764   17286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:17:26.491738   17286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:17:26.495069   17286 config.go:182] Loaded profile config "kubernetes-upgrade-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0304 04:17:26.495321   17286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:17:26.499777   17286 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:17:26.506735   17286 start.go:299] selected driver: qemu2
	I0304 04:17:26.506742   17286 start.go:903] validating driver "qemu2" against &{Name:kubernetes-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-323000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:17:26.506816   17286 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:17:26.509126   17286 cni.go:84] Creating CNI manager for ""
	I0304 04:17:26.509148   17286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:17:26.509153   17286 start_flags.go:323] config:
	{Name:kubernetes-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-32300
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:17:26.513545   17286 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:17:26.520717   17286 out.go:177] * Starting control plane node kubernetes-upgrade-323000 in cluster kubernetes-upgrade-323000
	I0304 04:17:26.524755   17286 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:17:26.524770   17286 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0304 04:17:26.524779   17286 cache.go:56] Caching tarball of preloaded images
	I0304 04:17:26.524836   17286 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:17:26.524842   17286 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0304 04:17:26.524919   17286 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kubernetes-upgrade-323000/config.json ...
	I0304 04:17:26.525425   17286 start.go:365] acquiring machines lock for kubernetes-upgrade-323000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:17:26.525449   17286 start.go:369] acquired machines lock for "kubernetes-upgrade-323000" in 17.791µs
	I0304 04:17:26.525456   17286 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:17:26.525460   17286 fix.go:54] fixHost starting: 
	I0304 04:17:26.525568   17286 fix.go:102] recreateIfNeeded on kubernetes-upgrade-323000: state=Stopped err=<nil>
	W0304 04:17:26.525580   17286 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:17:26.532789   17286 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-323000" ...
	I0304 04:17:26.536763   17286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:03:13:5c:b7:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:26.538745   17286 main.go:141] libmachine: STDOUT: 
	I0304 04:17:26.538776   17286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:17:26.538803   17286 fix.go:56] fixHost completed within 13.342917ms
	I0304 04:17:26.538807   17286 start.go:83] releasing machines lock for "kubernetes-upgrade-323000", held for 13.355292ms
	W0304 04:17:26.538814   17286 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:17:26.538844   17286 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:17:26.538848   17286 start.go:709] Will try again in 5 seconds ...
	I0304 04:17:31.540649   17286 start.go:365] acquiring machines lock for kubernetes-upgrade-323000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:17:31.540837   17286 start.go:369] acquired machines lock for "kubernetes-upgrade-323000" in 135.417µs
	I0304 04:17:31.540877   17286 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:17:31.540887   17286 fix.go:54] fixHost starting: 
	I0304 04:17:31.541237   17286 fix.go:102] recreateIfNeeded on kubernetes-upgrade-323000: state=Stopped err=<nil>
	W0304 04:17:31.541250   17286 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:17:31.545974   17286 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-323000" ...
	I0304 04:17:31.554249   17286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:03:13:5c:b7:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubernetes-upgrade-323000/disk.qcow2
	I0304 04:17:31.559879   17286 main.go:141] libmachine: STDOUT: 
	I0304 04:17:31.559922   17286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:17:31.559972   17286 fix.go:56] fixHost completed within 19.084833ms
	I0304 04:17:31.559982   17286 start.go:83] releasing machines lock for "kubernetes-upgrade-323000", held for 19.132083ms
	W0304 04:17:31.560081   17286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-323000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:17:31.567117   17286 out.go:177] 
	W0304 04:17:31.568600   17286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:17:31.568624   17286 out.go:239] * 
	* 
	W0304 04:17:31.569902   17286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:17:31.581056   17286 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-323000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-323000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-323000 version --output=json: exit status 1 (52.428958ms)

** stderr ** 
	error: context "kubernetes-upgrade-323000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-04 04:17:31.645631 -0800 PST m=+798.242455460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-323000 -n kubernetes-upgrade-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-323000 -n kubernetes-upgrade-323000: exit status 7 (34.2375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-323000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-323000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-323000
--- FAIL: TestKubernetesUpgrade (15.64s)
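Every qemu2 start in the failure above hit the same error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which typically means the socket_vmnet daemon the qemu2 driver depends on is not running on the build agent. A minimal triage sketch (assuming the socket path reported in the logs; the restart command is a Homebrew-install assumption and may differ on this agent):

```shell
# Hypothetical triage for the repeated "Connection refused" on the qemu2
# driver: check whether the socket_vmnet UNIX socket exists at the path
# minikube was configured with (SocketVMnetPath in the logs above).
SOCK="${SOCK:-/var/run/socket_vmnet}"
if [ -S "$SOCK" ]; then
    echo "socket exists: $SOCK"   # daemon socket present; check permissions next
else
    echo "socket missing: $SOCK"  # daemon likely not running; e.g. (assumption):
    # sudo brew services restart socket_vmnet
fi
```

If the socket is missing, every `minikube start --driver=qemu2` with `Network:socket_vmnet` will fail exactly as in this report, so restoring the daemon is a prerequisite to re-running these tests.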

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.44s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18284
- KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current775780491/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.44s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18284
- KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3599637879/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.42s)
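Both TestHyperkitDriverSkipUpgrade subtests fail identically with exit status 56 and `DRV_UNSUPPORTED_OS`, because hyperkit is an Intel-Mac hypervisor and this agent is darwin/arm64. A minimal sketch of that kind of platform gate, assuming a hypothetical `hyperkitSupported` helper (this is not minikube's actual driver-registry code):

```go
package main

import (
	"fmt"
	"runtime"
)

// hyperkitSupported is an illustrative stand-in for the platform gate
// behind the DRV_UNSUPPORTED_OS error above: hyperkit only runs on
// Intel Macs, so only darwin/amd64 passes. Sketch only, not
// minikube's real implementation.
func hyperkitSupported(goos, goarch string) bool {
	return goos == "darwin" && goarch == "amd64"
}

func main() {
	if !hyperkitSupported(runtime.GOOS, runtime.GOARCH) {
		// Mirrors the error text seen in the log.
		fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
	}
}
```

On the arm64 Jenkins agent running this suite, such a gate fails closed before any upgrade logic runs, which is why both subtests finish in a few seconds.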

TestStoppedBinaryUpgrade/Upgrade (611.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1199387510 start -p stopped-upgrade-289000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1199387510 start -p stopped-upgrade-289000 --memory=2200 --vm-driver=qemu2 : (45.348042833s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1199387510 -p stopped-upgrade-289000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1199387510 -p stopped-upgrade-289000 stop: (12.111798584s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-289000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-289000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9m13.547874166s)

-- stdout --
	* [stopped-upgrade-289000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node stopped-upgrade-289000 in cluster stopped-upgrade-289000
	* Restarting existing qemu2 VM for "stopped-upgrade-289000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0304 04:18:34.280959   17343 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:18:34.281131   17343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:18:34.281135   17343 out.go:304] Setting ErrFile to fd 2...
	I0304 04:18:34.281138   17343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:18:34.281291   17343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:18:34.282679   17343 out.go:298] Setting JSON to false
	I0304 04:18:34.303418   17343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10086,"bootTime":1709544628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:18:34.303492   17343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:18:34.307863   17343 out.go:177] * [stopped-upgrade-289000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:18:34.314881   17343 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:18:34.315000   17343 notify.go:220] Checking for updates...
	I0304 04:18:34.318805   17343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:18:34.321896   17343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:18:34.324879   17343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:18:34.327859   17343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:18:34.330877   17343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:18:34.334097   17343 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:18:34.336833   17343 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0304 04:18:34.339877   17343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:18:34.343684   17343 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:18:34.350801   17343 start.go:299] selected driver: qemu2
	I0304 04:18:34.350811   17343 start.go:903] validating driver "qemu2" against &{Name:stopped-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52792 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:18:34.350874   17343 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:18:34.353717   17343 cni.go:84] Creating CNI manager for ""
	I0304 04:18:34.353741   17343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:18:34.353746   17343 start_flags.go:323] config:
	{Name:stopped-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52792 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:18:34.353838   17343 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:18:34.360803   17343 out.go:177] * Starting control plane node stopped-upgrade-289000 in cluster stopped-upgrade-289000
	I0304 04:18:34.364771   17343 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0304 04:18:34.364804   17343 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0304 04:18:34.364811   17343 cache.go:56] Caching tarball of preloaded images
	I0304 04:18:34.364890   17343 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:18:34.364897   17343 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0304 04:18:34.364962   17343 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/config.json ...
	I0304 04:18:34.365330   17343 start.go:365] acquiring machines lock for stopped-upgrade-289000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:18:34.365370   17343 start.go:369] acquired machines lock for "stopped-upgrade-289000" in 32.083µs
	I0304 04:18:34.365383   17343 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:18:34.365387   17343 fix.go:54] fixHost starting: 
	I0304 04:18:34.365499   17343 fix.go:102] recreateIfNeeded on stopped-upgrade-289000: state=Stopped err=<nil>
	W0304 04:18:34.365509   17343 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:18:34.368878   17343 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-289000" ...
	I0304 04:18:34.376903   17343 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52757-:22,hostfwd=tcp::52758-:2376,hostname=stopped-upgrade-289000 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/disk.qcow2
	I0304 04:18:34.426347   17343 main.go:141] libmachine: STDOUT: 
	I0304 04:18:34.426374   17343 main.go:141] libmachine: STDERR: 
	I0304 04:18:34.426380   17343 main.go:141] libmachine: Waiting for VM to start (ssh -p 52757 docker@127.0.0.1)...
	I0304 04:18:54.876178   17343 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/config.json ...
	I0304 04:18:54.876837   17343 machine.go:88] provisioning docker machine ...
	I0304 04:18:54.876909   17343 buildroot.go:166] provisioning hostname "stopped-upgrade-289000"
	I0304 04:18:54.877037   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:54.877463   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:54.877479   17343 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-289000 && echo "stopped-upgrade-289000" | sudo tee /etc/hostname
	I0304 04:18:54.977148   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-289000
	
	I0304 04:18:54.977275   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:54.977471   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:54.977483   17343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-289000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-289000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-289000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0304 04:18:55.058599   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0304 04:18:55.058614   17343 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18284-15061/.minikube CaCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18284-15061/.minikube}
	I0304 04:18:55.058634   17343 buildroot.go:174] setting up certificates
	I0304 04:18:55.058645   17343 provision.go:83] configureAuth start
	I0304 04:18:55.058650   17343 provision.go:138] copyHostCerts
	I0304 04:18:55.058737   17343 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem, removing ...
	I0304 04:18:55.058748   17343 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem
	I0304 04:18:55.058879   17343 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.pem (1082 bytes)
	I0304 04:18:55.059106   17343 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem, removing ...
	I0304 04:18:55.059111   17343 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem
	I0304 04:18:55.059182   17343 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/cert.pem (1123 bytes)
	I0304 04:18:55.059329   17343 exec_runner.go:144] found /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem, removing ...
	I0304 04:18:55.059334   17343 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem
	I0304 04:18:55.059444   17343 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18284-15061/.minikube/key.pem (1679 bytes)
	I0304 04:18:55.059559   17343 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-289000 san=[127.0.0.1 localhost localhost 127.0.0.1 minikube stopped-upgrade-289000]
	I0304 04:18:55.131470   17343 provision.go:172] copyRemoteCerts
	I0304 04:18:55.131504   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0304 04:18:55.131512   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:18:55.168365   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0304 04:18:55.175413   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0304 04:18:55.182536   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0304 04:18:55.189143   17343 provision.go:86] duration metric: configureAuth took 130.489917ms
	I0304 04:18:55.189155   17343 buildroot.go:189] setting minikube options for container-runtime
	I0304 04:18:55.189250   17343 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:18:55.189287   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.189373   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.189378   17343 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0304 04:18:55.262233   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0304 04:18:55.262242   17343 buildroot.go:70] root file system type: tmpfs
	I0304 04:18:55.262293   17343 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0304 04:18:55.262338   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.262440   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.262475   17343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0304 04:18:55.337253   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0304 04:18:55.337315   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.337428   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.337436   17343 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0304 04:18:55.696107   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0304 04:18:55.696121   17343 machine.go:91] provisioned docker machine in 819.279583ms
	I0304 04:18:55.696130   17343 start.go:300] post-start starting for "stopped-upgrade-289000" (driver="qemu2")
	I0304 04:18:55.696137   17343 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0304 04:18:55.696200   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0304 04:18:55.696209   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:18:55.736673   17343 ssh_runner.go:195] Run: cat /etc/os-release
	I0304 04:18:55.737929   17343 info.go:137] Remote host: Buildroot 2021.02.12
	I0304 04:18:55.737937   17343 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18284-15061/.minikube/addons for local assets ...
	I0304 04:18:55.738021   17343 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18284-15061/.minikube/files for local assets ...
	I0304 04:18:55.738142   17343 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem -> 154862.pem in /etc/ssl/certs
	I0304 04:18:55.738266   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0304 04:18:55.740712   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem --> /etc/ssl/certs/154862.pem (1708 bytes)
	I0304 04:18:55.747646   17343 start.go:303] post-start completed in 51.51075ms
	I0304 04:18:55.747653   17343 fix.go:56] fixHost completed within 21.382393625s
	I0304 04:18:55.747688   17343 main.go:141] libmachine: Using SSH client type: native
	I0304 04:18:55.747786   17343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101259a30] 0x10125c290 <nil>  [] 0s} localhost 52757 <nil> <nil>}
	I0304 04:18:55.747790   17343 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0304 04:18:55.820195   17343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709554736.280054837
	
	I0304 04:18:55.820203   17343 fix.go:206] guest clock: 1709554736.280054837
	I0304 04:18:55.820207   17343 fix.go:219] Guest: 2024-03-04 04:18:56.280054837 -0800 PST Remote: 2024-03-04 04:18:55.747655 -0800 PST m=+21.501023126 (delta=532.399837ms)
	I0304 04:18:55.820220   17343 fix.go:190] guest clock delta is within tolerance: 532.399837ms
	I0304 04:18:55.820223   17343 start.go:83] releasing machines lock for "stopped-upgrade-289000", held for 21.454972084s
	I0304 04:18:55.820297   17343 ssh_runner.go:195] Run: cat /version.json
	I0304 04:18:55.820306   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:18:55.820362   17343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0304 04:18:55.820403   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	W0304 04:18:55.821007   17343 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52757: connect: connection refused
	I0304 04:18:55.821031   17343 retry.go:31] will retry after 181.279797ms: dial tcp [::1]:52757: connect: connection refused
	W0304 04:18:56.053756   17343 start.go:420] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0304 04:18:56.053936   17343 ssh_runner.go:195] Run: systemctl --version
	I0304 04:18:56.057890   17343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0304 04:18:56.061113   17343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0304 04:18:56.061174   17343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0304 04:18:56.066392   17343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0304 04:18:56.075676   17343 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0304 04:18:56.075691   17343 start.go:475] detecting cgroup driver to use...
	I0304 04:18:56.075804   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0304 04:18:56.086301   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0304 04:18:56.090299   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0304 04:18:56.093880   17343 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0304 04:18:56.093911   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0304 04:18:56.097576   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0304 04:18:56.100987   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0304 04:18:56.104315   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0304 04:18:56.107940   17343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0304 04:18:56.111272   17343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0304 04:18:56.114352   17343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0304 04:18:56.116932   17343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0304 04:18:56.119804   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:56.190473   17343 ssh_runner.go:195] Run: sudo systemctl restart containerd
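The run above rewrites `/etc/containerd/config.toml` with a series of `sed` invocations (sandbox_image, SystemdCgroup, conf_dir) and then restarts containerd. A minimal sketch of the cgroup-driver flip, run against a temp copy rather than the real config.toml; the heredoc is an illustrative fragment of the file, not its full contents:

```shell
# Stand-in for /etc/containerd/config.toml (fragment only, for illustration).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same sed as the logged command: anchor the key, keep the captured indentation,
# force the cgroupfs driver by setting SystemdCgroup = false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The same pattern — an anchored regex with the leading indentation captured in `\1` — is what the logged invocations use for `sandbox_image`, `restrict_oom_score_adj`, and `conf_dir` as well.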
	I0304 04:18:56.197378   17343 start.go:475] detecting cgroup driver to use...
	I0304 04:18:56.197445   17343 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0304 04:18:56.205782   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0304 04:18:56.211671   17343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0304 04:18:56.218790   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0304 04:18:56.223770   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0304 04:18:56.229387   17343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0304 04:18:56.291148   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0304 04:18:56.296603   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0304 04:18:56.302055   17343 ssh_runner.go:195] Run: which cri-dockerd
	I0304 04:18:56.303393   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0304 04:18:56.306479   17343 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0304 04:18:56.311422   17343 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0304 04:18:56.375734   17343 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0304 04:18:56.439015   17343 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0304 04:18:56.439090   17343 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0304 04:18:56.444516   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:56.508182   17343 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0304 04:18:57.673811   17343 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.16562025s)
	I0304 04:18:57.673886   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0304 04:18:57.679732   17343 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0304 04:18:57.686869   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0304 04:18:57.692148   17343 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0304 04:18:57.765724   17343 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0304 04:18:57.831788   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:57.895817   17343 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0304 04:18:57.903212   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0304 04:18:57.908509   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:18:57.979873   17343 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0304 04:18:58.023820   17343 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0304 04:18:58.023895   17343 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0304 04:18:58.026434   17343 start.go:543] Will wait 60s for crictl version
	I0304 04:18:58.026488   17343 ssh_runner.go:195] Run: which crictl
	I0304 04:18:58.028583   17343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0304 04:18:58.045736   17343 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0304 04:18:58.045819   17343 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0304 04:18:58.065004   17343 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0304 04:18:58.089864   17343 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0304 04:18:58.089940   17343 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0304 04:18:58.091793   17343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
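The one-liner above keeps the `host.minikube.internal` entry in `/etc/hosts` idempotent: strip any stale line, then append the current mapping. A sketch of the same pattern against a temp file (the sample hosts content is assumed for illustration):

```shell
# Temp stand-in for /etc/hosts with a stale entry already present.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n' > "$hosts"
# Drop any existing entry, then append exactly one fresh mapping.
{ grep -v 'host\.minikube\.internal' "$hosts"; printf '10.0.2.2\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host\.minikube\.internal' "$hosts"
```

Writing to a temp file and `cp`-ing back (as the logged command does with `/tmp/h.$$`) avoids truncating `/etc/hosts` while it is being read.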
	I0304 04:18:58.096612   17343 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0304 04:18:58.096659   17343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0304 04:18:58.108165   17343 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0304 04:18:58.108174   17343 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0304 04:18:58.108225   17343 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0304 04:18:58.112004   17343 ssh_runner.go:195] Run: which lz4
	I0304 04:18:58.113378   17343 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0304 04:18:58.114637   17343 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0304 04:18:58.114648   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0304 04:18:58.863743   17343 docker.go:649] Took 0.750397 seconds to copy over tarball
	I0304 04:18:58.863817   17343 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0304 04:19:00.084137   17343 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.220303708s)
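The preload is a ~360 MB lz4-compressed tarball unpacked with tar's `-I` filter option. A reproducible stand-in using an uncompressed tarball so `lz4` need not be installed; the actual logged command is shown in the comment:

```shell
# Build a tiny tarball mimicking the preload layout.
work=$(mktemp -d)
mkdir -p "$work/src/var/lib/docker"
echo layer > "$work/src/var/lib/docker/blob"
tar -C "$work/src" -cf "$work/preloaded.tar" var
# Real command from the log:
#   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
dest=$(mktemp -d)
tar -C "$dest" -xf "$work/preloaded.tar"
cat "$dest/var/lib/docker/blob"
```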
	I0304 04:19:00.084154   17343 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0304 04:19:00.101057   17343 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0304 04:19:00.104392   17343 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0304 04:19:00.109690   17343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0304 04:19:00.172332   17343 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0304 04:19:01.689812   17343 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.517470042s)
	I0304 04:19:01.689921   17343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0304 04:19:01.703936   17343 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0304 04:19:01.703946   17343 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0304 04:19:01.703951   17343 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0304 04:19:01.744618   17343 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:01.745739   17343 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:01.745847   17343 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:01.746044   17343 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:01.746209   17343 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:01.746266   17343 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0304 04:19:01.747383   17343 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:01.747684   17343 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:01.758812   17343 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:01.758882   17343 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:01.761359   17343 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:01.762044   17343 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:01.762121   17343 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:01.762189   17343 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:01.762308   17343 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0304 04:19:01.762332   17343 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:03.674807   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:03.689736   17343 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0304 04:19:03.689768   17343 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:03.689834   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0304 04:19:03.701783   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0304 04:19:03.754111   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:03.766612   17343 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0304 04:19:03.766630   17343 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:03.766678   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0304 04:19:03.777775   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0304 04:19:03.792330   17343 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0304 04:19:03.792447   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:03.793526   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:03.798153   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0304 04:19:03.804014   17343 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0304 04:19:03.804036   17343 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:03.804086   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0304 04:19:03.808167   17343 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0304 04:19:03.808187   17343 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:03.808240   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0304 04:19:03.808958   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:03.820483   17343 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0304 04:19:03.820505   17343 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0304 04:19:03.820564   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0304 04:19:03.820666   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:03.822213   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0304 04:19:03.822302   17343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0304 04:19:03.828578   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0304 04:19:03.852213   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0304 04:19:03.852231   17343 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0304 04:19:03.852251   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0304 04:19:03.852309   17343 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0304 04:19:03.852220   17343 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0304 04:19:03.852325   17343 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:03.852328   17343 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:03.852338   17343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0304 04:19:03.852363   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0304 04:19:03.852366   17343 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0304 04:19:03.879861   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0304 04:19:03.889412   17343 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0304 04:19:03.889434   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0304 04:19:03.889439   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0304 04:19:03.903613   17343 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0304 04:19:03.903626   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0304 04:19:03.940590   17343 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0304 04:19:03.940612   17343 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0304 04:19:03.940618   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0304 04:19:03.966652   17343 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
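Each cached image above follows the same copy-if-missing pattern: `stat` the target path, and only when that fails transfer the file and pipe it into `docker load`. A local sketch with `cp` standing in for the scp step and a hypothetical file name (no Docker daemon needed):

```shell
# Source blob standing in for a cached image tarball.
src=$(mktemp); echo image-bits > "$src"
dest_dir=$(mktemp -d); dest="$dest_dir/pause_3.7"   # hypothetical target name
# Existence check as in the log: stat, and copy only on failure.
if ! stat -c "%s %y" "$dest" >/dev/null 2>&1; then
  cp "$src" "$dest"   # the log does scp here, then: sudo cat "$dest" | docker load
fi
stat -c %s "$dest"
```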
	W0304 04:19:04.347356   17343 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0304 04:19:04.347919   17343 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:04.387306   17343 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0304 04:19:04.387348   17343 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:04.387453   17343 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:19:04.413549   17343 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0304 04:19:04.413688   17343 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0304 04:19:04.415743   17343 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0304 04:19:04.415759   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0304 04:19:04.445589   17343 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0304 04:19:04.445605   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0304 04:19:04.701490   17343 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0304 04:19:04.701530   17343 cache_images.go:92] LoadImages completed in 2.997589041s
	W0304 04:19:04.701573   17343 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0304 04:19:04.701634   17343 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0304 04:19:04.714704   17343 cni.go:84] Creating CNI manager for ""
	I0304 04:19:04.714717   17343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:19:04.714725   17343 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0304 04:19:04.714734   17343 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-289000 NodeName:stopped-upgrade-289000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0304 04:19:04.714800   17343 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-289000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0304 04:19:04.714834   17343 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-289000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
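The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check against a trimmed stand-in, counting separators and kinds:

```shell
# Trimmed stand-in for the generated kubeadm.yaml (kinds only).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^---$' "$cfg"   # three separators, so four documents
grep '^kind:' "$cfg"
```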
	I0304 04:19:04.714885   17343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0304 04:19:04.718045   17343 binaries.go:44] Found k8s binaries, skipping transfer
	I0304 04:19:04.718075   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0304 04:19:04.720602   17343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0304 04:19:04.725871   17343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0304 04:19:04.730680   17343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0304 04:19:04.736214   17343 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0304 04:19:04.737518   17343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0304 04:19:04.740850   17343 certs.go:56] Setting up /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000 for IP: 10.0.2.15
	I0304 04:19:04.740861   17343 certs.go:190] acquiring lock for shared ca certs: {Name:mk261f788a3b9cd088f9e587f9da53d875f26951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:19:04.740997   17343 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.key
	I0304 04:19:04.741322   17343 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.key
	I0304 04:19:04.741597   17343 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key
	I0304 04:19:04.741741   17343 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/apiserver.key.49504c3e
	I0304 04:19:04.741848   17343 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/proxy-client.key
	I0304 04:19:04.741986   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486.pem (1338 bytes)
	W0304 04:19:04.742136   17343 certs.go:433] ignoring /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486_empty.pem, impossibly tiny 0 bytes
	I0304 04:19:04.742143   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca-key.pem (1675 bytes)
	I0304 04:19:04.742179   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem (1082 bytes)
	I0304 04:19:04.742199   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem (1123 bytes)
	I0304 04:19:04.742225   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/certs/key.pem (1679 bytes)
	I0304 04:19:04.742265   17343 certs.go:437] found cert: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem (1708 bytes)
	I0304 04:19:04.742589   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0304 04:19:04.749530   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0304 04:19:04.756656   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0304 04:19:04.763529   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0304 04:19:04.770257   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0304 04:19:04.776937   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0304 04:19:04.783478   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0304 04:19:04.790449   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0304 04:19:04.797013   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/15486.pem --> /usr/share/ca-certificates/15486.pem (1338 bytes)
	I0304 04:19:04.803687   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/ssl/certs/154862.pem --> /usr/share/ca-certificates/154862.pem (1708 bytes)
	I0304 04:19:04.810821   17343 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0304 04:19:04.817603   17343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0304 04:19:04.822782   17343 ssh_runner.go:195] Run: openssl version
	I0304 04:19:04.824788   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15486.pem && ln -fs /usr/share/ca-certificates/15486.pem /etc/ssl/certs/15486.pem"
	I0304 04:19:04.828161   17343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15486.pem
	I0304 04:19:04.829656   17343 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Mar  4 12:05 /usr/share/ca-certificates/15486.pem
	I0304 04:19:04.829675   17343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15486.pem
	I0304 04:19:04.831370   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15486.pem /etc/ssl/certs/51391683.0"
	I0304 04:19:04.834728   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154862.pem && ln -fs /usr/share/ca-certificates/154862.pem /etc/ssl/certs/154862.pem"
	I0304 04:19:04.837705   17343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154862.pem
	I0304 04:19:04.839089   17343 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Mar  4 12:05 /usr/share/ca-certificates/154862.pem
	I0304 04:19:04.839109   17343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154862.pem
	I0304 04:19:04.841008   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154862.pem /etc/ssl/certs/3ec20f2e.0"
	I0304 04:19:04.844149   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0304 04:19:04.847534   17343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:19:04.849236   17343 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Mar  4 12:15 /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:19:04.849261   17343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0304 04:19:04.851013   17343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0304 04:19:04.854121   17343 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0304 04:19:04.855563   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0304 04:19:04.858599   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0304 04:19:04.860559   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0304 04:19:04.863084   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0304 04:19:04.864951   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0304 04:19:04.866736   17343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0304 04:19:04.868665   17343 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-289000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52792 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 Clus
terName:stopped-upgrade-289000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0304 04:19:04.868729   17343 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0304 04:19:04.878587   17343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0304 04:19:04.881580   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:19:04.882439   17343 main.go:141] libmachine: Using SSH client type: external
	I0304 04:19:04.882457   17343 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa (-rw-------)
	I0304 04:19:04.882474   17343 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757] /usr/bin/ssh <nil>}
	I0304 04:19:04.882488   17343 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757 -f -NTL 52792:localhost:8443
	I0304 04:19:04.927612   17343 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0304 04:19:04.927702   17343 kubeadm.go:636] restartCluster start
	I0304 04:19:04.927758   17343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0304 04:19:04.931457   17343 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0304 04:19:04.931835   17343 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-289000" does not appear in /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:19:04.931938   17343 kubeconfig.go:146] "stopped-upgrade-289000" context is missing from /Users/jenkins/minikube-integration/18284-15061/kubeconfig - will repair!
	I0304 04:19:04.932180   17343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:19:04.932676   17343 kapi.go:59] client config for stopped-upgrade-289000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[
]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10254f7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:19:04.933184   17343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0304 04:19:04.935919   17343 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-289000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0304 04:19:04.935924   17343 kubeadm.go:1135] stopping kube-system containers ...
	I0304 04:19:04.935959   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0304 04:19:04.946867   17343 docker.go:483] Stopping containers: [0c27c99061a8 331d1cec5665 68d9e42070f0 a8a74fac7389 375c7c379b12 1385b50317f7 97c67652317e a736c2fdf75e]
	I0304 04:19:04.946961   17343 ssh_runner.go:195] Run: docker stop 0c27c99061a8 331d1cec5665 68d9e42070f0 a8a74fac7389 375c7c379b12 1385b50317f7 97c67652317e a736c2fdf75e
	I0304 04:19:04.957821   17343 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0304 04:19:04.963948   17343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:19:04.967218   17343 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0304 04:19:04.967245   17343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:19:04.970064   17343 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0304 04:19:04.970069   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:04.997020   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.474561   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.621341   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.648177   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0304 04:19:05.674932   17343 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:19:05.674995   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:19:06.177107   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:19:06.677057   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:19:06.682915   17343 api_server.go:72] duration metric: took 1.007989875s to wait for apiserver process to appear ...
	I0304 04:19:06.682926   17343 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:19:06.682939   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:11.684537   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:11.684580   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:16.684876   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:16.684927   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:21.685212   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:21.685245   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:26.685542   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:26.685625   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:31.686263   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:31.686347   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:36.687385   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:36.687471   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:41.689077   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:41.689161   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:46.690615   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:46.690685   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:51.692742   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:51.692829   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:19:56.695189   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:19:56.695310   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:01.697823   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:01.697845   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:06.700036   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:06.700280   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:06.724426   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:06.724597   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:06.742155   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:06.742248   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:06.754704   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:06.754775   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:06.765876   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:06.765959   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:06.775970   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:06.776041   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:06.786123   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:06.786202   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:06.796059   17343 logs.go:276] 0 containers: []
	W0304 04:20:06.796075   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:06.796132   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:06.807337   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:06.807355   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:06.807361   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:06.819177   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:06.819188   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:06.834727   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:06.834740   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:06.846610   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:06.846623   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:06.870272   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:06.870282   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:06.885098   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:06.885108   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:06.889192   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:06.889200   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:06.903252   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:06.903262   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:06.945597   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:06.945610   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:06.961200   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:06.961217   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:06.972497   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:06.972511   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:06.986770   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:06.986780   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:07.099948   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:07.099962   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:07.115275   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:07.115287   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:07.132515   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:07.132531   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:07.153557   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:07.153569   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:07.166416   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:07.166429   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:09.685060   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:14.685486   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:14.685713   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:14.710613   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:14.710727   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:14.726975   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:14.727067   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:14.739844   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:14.739908   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:14.751244   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:14.751328   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:14.761555   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:14.761623   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:14.772432   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:14.772501   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:14.782974   17343 logs.go:276] 0 containers: []
	W0304 04:20:14.782986   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:14.783037   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:14.794285   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:14.794308   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:14.794315   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:14.830449   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:14.830460   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:14.845216   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:14.845229   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:14.859539   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:14.859550   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:14.864378   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:14.864385   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:14.878104   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:14.878114   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:14.895126   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:14.895135   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:14.912583   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:14.912594   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:14.926662   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:14.926677   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:14.938768   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:14.938778   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:14.953414   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:14.953420   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:14.991109   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:14.991120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:15.008097   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:15.008107   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:15.025700   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:15.025711   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:15.037067   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:15.037077   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:15.048217   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:15.048228   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:15.064311   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:15.064322   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:17.590257   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:22.592868   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:22.593033   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:22.604337   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:22.604412   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:22.615165   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:22.615238   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:22.625435   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:22.625511   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:22.636267   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:22.636352   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:22.646695   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:22.646767   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:22.657639   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:22.657708   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:22.668411   17343 logs.go:276] 0 containers: []
	W0304 04:20:22.668481   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:22.668559   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:22.679428   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:22.679445   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:22.679450   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:22.683999   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:22.684006   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:22.723605   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:22.723616   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:22.740702   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:22.740713   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:22.752595   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:22.752618   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:22.777808   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:22.777819   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:22.790553   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:22.790573   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:22.809522   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:22.809544   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:22.861492   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:22.861507   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:22.881676   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:22.881687   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:22.897697   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:22.897711   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:22.912824   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:22.912840   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:22.929513   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:22.929528   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:22.942750   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:22.942762   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:22.955111   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:22.955123   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:22.967640   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:22.967653   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:22.992035   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:22.992048   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:25.511299   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:30.513536   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:30.513696   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:30.526100   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:30.526186   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:30.541971   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:30.542069   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:30.551884   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:30.551954   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:30.562914   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:30.562991   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:30.572898   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:30.572961   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:30.586394   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:30.586460   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:30.595945   17343 logs.go:276] 0 containers: []
	W0304 04:20:30.595957   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:30.596022   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:30.609931   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:30.609947   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:30.609953   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:30.626780   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:30.626789   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:30.665614   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:30.665631   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:30.688725   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:30.688735   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:30.703836   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:30.703848   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:30.721755   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:30.721774   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:30.733924   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:30.733934   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:30.767825   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:30.767837   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:30.779964   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:30.779979   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:30.792115   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:30.792124   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:30.807752   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:30.807765   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:30.822499   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:30.822511   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:30.840764   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:30.840776   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:30.859359   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:30.859378   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:30.874254   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:30.874270   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:30.891262   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:30.891273   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:30.916491   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:30.916500   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:33.422918   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:38.425299   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:38.425780   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:38.455390   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:38.455547   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:38.473159   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:38.473253   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:38.486315   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:38.486391   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:38.498271   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:38.498345   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:38.508447   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:38.508526   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:38.522690   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:38.522769   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:38.532716   17343 logs.go:276] 0 containers: []
	W0304 04:20:38.532726   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:38.532780   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:38.548246   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:38.548271   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:38.548277   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:38.564419   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:38.564428   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:38.586806   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:38.586817   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:38.598345   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:38.598356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:38.615855   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:38.615864   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:38.630638   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:38.630649   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:38.634870   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:38.634878   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:38.670459   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:38.670470   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:38.686062   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:38.686071   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:38.725108   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:38.725121   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:38.739758   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:38.739771   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:38.750633   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:38.750644   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:38.764928   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:38.764938   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:38.776414   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:38.776425   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:38.793587   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:38.793600   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:38.804733   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:38.804752   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:38.816254   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:38.816267   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:41.342492   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:46.344856   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:46.344983   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:46.359515   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:46.359594   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:46.374788   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:46.374854   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:46.385995   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:46.386070   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:46.396230   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:46.396310   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:46.406741   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:46.406799   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:46.417131   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:46.417186   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:46.430932   17343 logs.go:276] 0 containers: []
	W0304 04:20:46.430944   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:46.431001   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:46.441294   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:46.441310   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:46.441316   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:46.476751   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:46.476764   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:46.494388   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:46.494400   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:46.531440   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:46.531453   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:46.548908   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:46.548921   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:46.565459   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:46.565471   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:46.583313   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:46.583324   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:46.599108   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:46.599115   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:46.616244   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:46.616255   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:46.627414   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:46.627425   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:46.639074   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:46.639083   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:46.658559   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:46.658570   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:46.678397   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:46.678408   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:46.695511   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:46.695522   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:46.718824   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:46.718834   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:46.732743   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:46.732756   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:46.747383   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:46.747395   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:49.252529   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:20:54.254702   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:20:54.254866   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:20:54.275506   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:20:54.275613   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:20:54.290048   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:20:54.290124   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:20:54.310353   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:20:54.310428   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:20:54.320973   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:20:54.321044   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:20:54.331345   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:20:54.331412   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:20:54.341959   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:20:54.342036   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:20:54.352005   17343 logs.go:276] 0 containers: []
	W0304 04:20:54.352014   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:20:54.352065   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:20:54.362341   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:20:54.362359   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:20:54.362365   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:20:54.406261   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:20:54.406272   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:20:54.421026   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:20:54.421037   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:20:54.432359   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:20:54.432372   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:20:54.449906   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:20:54.449920   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:20:54.470179   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:20:54.470193   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:20:54.493198   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:20:54.493205   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:20:54.504750   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:20:54.504763   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:20:54.519573   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:20:54.519580   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:20:54.555107   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:20:54.555120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:20:54.570991   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:20:54.571002   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:20:54.582550   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:20:54.582562   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:20:54.593784   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:20:54.593794   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:20:54.598187   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:20:54.598193   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:20:54.613111   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:20:54.613120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:20:54.629854   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:20:54.629864   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:20:54.648377   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:20:54.648390   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:20:57.162368   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:02.164724   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:02.165080   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:02.197307   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:02.197434   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:02.214103   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:02.214183   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:02.229999   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:02.230070   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:02.241725   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:02.241797   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:02.252346   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:02.252412   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:02.265855   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:02.265923   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:02.278055   17343 logs.go:276] 0 containers: []
	W0304 04:21:02.278066   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:02.278123   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:02.289921   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:02.289942   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:02.289948   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:02.305306   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:02.305317   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:02.316407   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:02.316418   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:02.353108   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:02.353119   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:02.367440   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:02.367451   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:02.381191   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:02.381203   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:02.395823   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:02.395836   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:02.415315   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:02.415327   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:02.426801   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:02.426812   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:02.452256   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:02.452266   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:02.467204   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:02.467213   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:02.471493   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:02.471502   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:02.490918   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:02.490928   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:02.502258   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:02.502269   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:02.518327   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:02.518338   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:02.554169   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:02.554180   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:02.570205   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:02.570216   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:05.082968   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:10.084725   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:10.084948   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:10.108024   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:10.108127   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:10.123728   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:10.123817   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:10.136587   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:10.136654   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:10.148357   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:10.148444   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:10.159183   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:10.159252   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:10.169863   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:10.169939   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:10.180289   17343 logs.go:276] 0 containers: []
	W0304 04:21:10.180300   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:10.180359   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:10.190878   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:10.190894   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:10.190900   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:10.202758   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:10.202769   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:10.225981   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:10.225988   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:10.241103   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:10.241114   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:10.252344   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:10.252356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:10.269385   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:10.269395   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:10.281458   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:10.281468   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:10.296407   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:10.296413   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:10.310874   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:10.310888   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:10.349128   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:10.349144   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:10.363874   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:10.363885   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:10.381993   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:10.382009   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:10.393245   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:10.393256   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:10.409110   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:10.409121   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:10.413920   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:10.413929   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:10.448899   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:10.448910   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:10.464101   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:10.464112   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:12.987045   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:17.989417   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:17.989631   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:18.010767   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:18.010881   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:18.024801   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:18.024879   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:18.037018   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:18.037097   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:18.047836   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:18.047906   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:18.061756   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:18.061821   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:18.072307   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:18.072374   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:18.082167   17343 logs.go:276] 0 containers: []
	W0304 04:21:18.082179   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:18.082249   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:18.093471   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:18.093487   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:18.093492   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:18.107875   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:18.107889   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:18.124337   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:18.124348   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:18.136335   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:18.136346   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:18.153629   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:18.153639   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:18.164786   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:18.164797   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:18.180634   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:18.180644   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:18.215949   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:18.215964   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:18.230612   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:18.230623   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:18.269766   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:18.269778   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:18.281304   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:18.281316   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:18.303966   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:18.303974   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:18.307877   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:18.307883   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:18.319028   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:18.319043   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:18.333547   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:18.333557   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:18.344773   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:18.344782   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:18.361494   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:18.361510   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:20.881488   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:25.883517   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:25.883747   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:25.905915   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:25.906024   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:25.919531   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:25.919610   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:25.931633   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:25.931695   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:25.946022   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:25.946105   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:25.959211   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:25.959285   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:25.970168   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:25.970238   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:25.980688   17343 logs.go:276] 0 containers: []
	W0304 04:21:25.980701   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:25.980756   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:25.997517   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:25.997534   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:25.997540   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:26.011650   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:26.011665   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:26.055201   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:26.055212   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:26.067104   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:26.067115   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:26.082214   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:26.082224   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:26.097882   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:26.097890   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:26.111829   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:26.111840   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:26.123691   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:26.123703   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:26.140738   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:26.140748   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:26.153073   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:26.153087   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:26.164863   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:26.164873   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:26.188751   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:26.188759   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:26.203699   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:26.203709   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:26.224072   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:26.224081   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:26.236278   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:26.236288   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:26.240838   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:26.240847   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:26.274561   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:26.274571   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:28.791279   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:33.793544   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:33.793709   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:33.813393   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:33.813501   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:33.827880   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:33.827954   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:33.842333   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:33.842401   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:33.853508   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:33.853591   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:33.864399   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:33.864468   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:33.875238   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:33.875305   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:33.885927   17343 logs.go:276] 0 containers: []
	W0304 04:21:33.885939   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:33.885996   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:33.896548   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:33.896564   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:33.896570   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:33.913364   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:33.913377   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:33.928665   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:33.928676   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:33.933212   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:33.933223   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:33.972527   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:33.972539   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:33.987399   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:33.987410   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:33.998526   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:33.998536   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:34.034144   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:34.034155   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:34.046106   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:34.046120   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:34.058291   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:34.058303   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:34.071202   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:34.071212   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:34.086195   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:34.086206   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:34.111723   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:34.111735   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:34.123657   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:34.123670   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:34.138660   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:34.138668   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:34.152588   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:34.152598   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:34.170966   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:34.170976   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:36.697388   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:41.699636   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:41.699855   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:41.731911   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:41.732003   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:41.747310   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:41.747388   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:41.759563   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:41.759635   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:41.770138   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:41.770210   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:41.780204   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:41.780283   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:41.790891   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:41.790967   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:41.809830   17343 logs.go:276] 0 containers: []
	W0304 04:21:41.809842   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:41.809900   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:41.820305   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:41.820325   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:41.820331   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:41.855578   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:41.855590   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:41.867332   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:41.867347   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:41.878862   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:41.878875   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:41.883497   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:41.883504   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:41.920560   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:41.920571   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:41.938449   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:41.938464   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:41.953944   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:41.953954   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:41.965531   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:41.965540   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:41.988266   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:41.988281   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:42.003278   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:42.003290   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:42.025693   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:42.025704   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:42.040008   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:42.040021   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:42.050858   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:42.050871   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:42.070011   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:42.070025   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:42.088279   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:42.088290   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:42.103198   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:42.103208   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:44.617407   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:49.620073   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:49.620273   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:49.646570   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:49.646700   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:49.663445   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:49.663527   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:49.679726   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:49.679801   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:49.691790   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:49.691865   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:49.703770   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:49.703841   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:49.719942   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:49.720015   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:49.734343   17343 logs.go:276] 0 containers: []
	W0304 04:21:49.734355   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:49.734416   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:49.745175   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:49.745192   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:49.745197   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:49.762328   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:49.762341   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:49.776478   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:49.776491   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:49.797783   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:49.797798   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:49.812842   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:49.812853   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:49.823718   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:49.823732   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:49.828361   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:49.828367   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:49.862586   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:49.862601   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:49.899702   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:49.899711   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:49.923964   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:49.923975   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:49.935666   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:49.935676   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:21:49.949182   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:49.949192   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:49.960557   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:49.960566   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:49.982999   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:49.983008   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:49.994618   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:49.994630   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:50.009209   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:50.009215   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:50.023537   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:50.023548   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:52.543170   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:21:57.545910   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:21:57.546217   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:21:57.580374   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:21:57.580506   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:21:57.598884   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:21:57.598975   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:21:57.615837   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:21:57.615914   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:21:57.627486   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:21:57.627558   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:21:57.637739   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:21:57.637811   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:21:57.648631   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:21:57.648703   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:21:57.658964   17343 logs.go:276] 0 containers: []
	W0304 04:21:57.658977   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:21:57.659034   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:21:57.669655   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:21:57.669672   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:21:57.669678   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:21:57.685001   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:21:57.685012   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:21:57.696943   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:21:57.696957   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:21:57.708618   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:21:57.708628   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:21:57.724432   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:21:57.724446   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:21:57.741878   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:21:57.741895   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:21:57.754010   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:21:57.754023   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:21:57.772134   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:21:57.772147   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:21:57.788497   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:21:57.788508   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:21:57.825168   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:21:57.825180   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:21:57.862511   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:21:57.862522   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:21:57.878412   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:21:57.878425   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:21:57.893063   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:21:57.893073   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:21:57.908993   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:21:57.909007   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:21:57.920508   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:21:57.920520   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:21:57.944008   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:21:57.944017   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:21:57.948451   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:21:57.948458   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:00.464743   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:05.467160   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:05.467343   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:05.485252   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:05.485339   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:05.498636   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:05.498705   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:05.510143   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:05.510212   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:05.522069   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:05.522134   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:05.532277   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:05.532338   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:05.542884   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:05.542953   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:05.553260   17343 logs.go:276] 0 containers: []
	W0304 04:22:05.553276   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:05.553339   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:05.571337   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:05.571356   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:05.571364   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:05.575976   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:05.575985   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:05.611025   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:05.611036   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:05.624668   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:05.624679   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:05.638909   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:05.638921   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:05.661297   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:05.661307   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:05.703332   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:05.703343   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:05.718385   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:05.718397   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:05.731008   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:05.731019   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:05.742479   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:05.742491   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:05.759241   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:05.759252   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:05.774269   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:05.774282   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:05.785926   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:05.785938   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:05.801159   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:05.801172   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:05.813323   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:05.813335   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:05.827880   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:05.827888   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:05.841344   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:05.841356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:08.358735   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:13.361120   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:13.361444   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:13.389767   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:13.389885   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:13.409139   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:13.409217   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:13.422641   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:13.422716   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:13.434518   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:13.434574   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:13.445300   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:13.445363   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:13.455872   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:13.455944   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:13.466172   17343 logs.go:276] 0 containers: []
	W0304 04:22:13.466182   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:13.466241   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:13.481826   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:13.481844   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:13.481850   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:13.517237   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:13.517248   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:13.531691   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:13.531703   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:13.569927   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:13.569938   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:13.592496   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:13.592503   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:13.596872   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:13.596878   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:13.612819   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:13.612829   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:13.625114   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:13.625122   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:13.642471   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:13.642481   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:13.658229   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:13.658241   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:13.669942   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:13.669956   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:13.684603   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:13.684610   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:13.699843   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:13.699857   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:13.711560   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:13.711571   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:13.727779   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:13.727791   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:13.747226   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:13.747237   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:13.758277   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:13.758287   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:16.278356   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:21.280588   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:21.280704   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:21.292164   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:21.292236   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:21.302325   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:21.302394   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:21.312925   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:21.312985   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:21.324213   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:21.324286   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:21.342097   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:21.342169   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:21.352119   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:21.352193   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:21.362685   17343 logs.go:276] 0 containers: []
	W0304 04:22:21.362697   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:21.362761   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:21.374062   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:21.374079   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:21.374085   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:21.388932   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:21.388940   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:21.402899   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:21.402910   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:21.417179   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:21.417189   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:21.433830   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:21.433840   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:21.449051   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:21.449060   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:21.460406   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:21.460418   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:21.499803   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:21.499816   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:21.539517   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:21.539527   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:21.551527   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:21.551538   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:21.571209   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:21.571219   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:21.582569   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:21.582579   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:21.586800   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:21.586806   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:21.597876   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:21.597888   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:21.614916   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:21.614927   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:21.626799   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:21.626809   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:21.640517   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:21.640527   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:24.166176   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:29.168449   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:29.168543   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:29.201845   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:29.201931   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:29.214375   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:29.214455   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:29.226758   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:29.226827   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:29.237924   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:29.237991   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:29.256724   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:29.256793   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:29.267134   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:29.267197   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:29.277295   17343 logs.go:276] 0 containers: []
	W0304 04:22:29.277313   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:29.277366   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:29.288312   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:29.288330   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:29.288335   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:29.306046   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:29.306057   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:29.317271   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:29.317283   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:29.340049   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:29.340056   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:29.355359   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:29.355365   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:29.369936   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:29.369946   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:29.416539   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:29.416550   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:29.431161   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:29.431171   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:29.442757   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:29.442772   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:29.457804   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:29.457814   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:29.473083   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:29.473094   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:29.477261   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:29.477266   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:29.494730   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:29.494742   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:29.506419   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:29.506429   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:29.542935   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:29.542945   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:29.557198   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:29.557209   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:29.571588   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:29.571597   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:32.085214   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:37.085964   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:37.086108   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:37.099945   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:37.100022   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:37.110738   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:37.110806   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:37.121156   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:37.121231   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:37.131833   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:37.131915   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:37.142101   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:37.142169   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:37.153015   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:37.153092   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:37.163201   17343 logs.go:276] 0 containers: []
	W0304 04:22:37.163213   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:37.163276   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:37.174197   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:37.174214   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:37.174219   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:37.196536   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:37.196544   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:37.211589   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:37.211597   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:37.226022   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:37.226033   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:37.268945   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:37.268958   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:37.283471   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:37.283485   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:37.302011   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:37.302021   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:37.313546   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:37.313558   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:37.328493   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:37.328504   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:37.340907   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:37.340919   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:37.356584   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:37.356596   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:37.367682   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:37.367692   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:37.372425   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:37.372433   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:37.389627   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:37.389638   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:37.401123   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:37.401134   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:37.437807   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:37.437823   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:37.454768   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:37.454780   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:39.975315   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:44.977846   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:44.978017   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:45.005024   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:45.005126   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:45.018339   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:45.018424   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:45.030125   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:45.030194   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:45.040777   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:45.040849   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:45.051315   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:45.051385   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:45.063675   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:45.063748   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:45.073398   17343 logs.go:276] 0 containers: []
	W0304 04:22:45.073410   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:45.073470   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:45.084285   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:45.084303   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:45.084308   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:45.100814   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:45.100824   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:45.118896   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:45.118906   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:45.154728   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:45.154744   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:45.170080   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:45.170094   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:45.182219   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:45.182230   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:45.196711   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:45.196721   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:45.210662   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:45.210678   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:45.232958   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:45.232969   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:45.244934   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:45.244946   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:45.260610   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:45.260621   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:45.265038   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:45.265044   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:45.283282   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:45.283294   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:45.322527   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:45.322548   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:45.342834   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:45.342846   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:45.354262   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:45.354274   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:45.369801   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:45.369813   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:47.885564   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:22:52.888293   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:22:52.888621   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:22:52.929114   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:22:52.929243   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:22:52.946545   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:22:52.946628   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:22:52.959259   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:22:52.959334   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:22:52.970192   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:22:52.970262   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:22:52.980805   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:22:52.980878   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:22:52.991216   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:22:52.991290   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:22:53.001721   17343 logs.go:276] 0 containers: []
	W0304 04:22:53.001733   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:22:53.001791   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:22:53.012095   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:22:53.012111   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:22:53.012117   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:22:53.023553   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:22:53.023565   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:22:53.059339   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:22:53.059351   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:22:53.073696   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:22:53.073706   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:22:53.089014   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:22:53.089027   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:22:53.129956   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:22:53.129967   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:22:53.159322   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:22:53.159332   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:22:53.180483   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:22:53.180493   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:22:53.195524   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:22:53.195535   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:22:53.207340   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:22:53.207352   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:22:53.218750   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:22:53.218760   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:22:53.230070   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:22:53.230082   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:22:53.245169   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:22:53.245176   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:22:53.249066   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:22:53.249072   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:22:53.270076   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:22:53.270084   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:22:53.283079   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:22:53.283089   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:22:53.324820   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:22:53.324844   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:22:55.851542   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:00.852136   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:00.852355   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:23:00.882159   17343 logs.go:276] 2 containers: [7f2681dd8e6a a8a74fac7389]
	I0304 04:23:00.882277   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:23:00.900399   17343 logs.go:276] 2 containers: [245404c9b0c4 0c27c99061a8]
	I0304 04:23:00.900490   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:23:00.913901   17343 logs.go:276] 1 containers: [6099f9453164]
	I0304 04:23:00.913974   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:23:00.925299   17343 logs.go:276] 2 containers: [33b9083f8c1f 331d1cec5665]
	I0304 04:23:00.925372   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:23:00.936669   17343 logs.go:276] 1 containers: [00f9c2dde850]
	I0304 04:23:00.936736   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:23:00.947878   17343 logs.go:276] 2 containers: [5d91f653fd1f 68d9e42070f0]
	I0304 04:23:00.947948   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:23:00.958223   17343 logs.go:276] 0 containers: []
	W0304 04:23:00.958235   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:23:00.958301   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:23:00.968514   17343 logs.go:276] 2 containers: [891252a59e28 fa70dd3bbae7]
	I0304 04:23:00.968533   17343 logs.go:123] Gathering logs for etcd [245404c9b0c4] ...
	I0304 04:23:00.968540   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 245404c9b0c4"
	I0304 04:23:00.982321   17343 logs.go:123] Gathering logs for kube-scheduler [33b9083f8c1f] ...
	I0304 04:23:00.982334   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33b9083f8c1f"
	I0304 04:23:00.998762   17343 logs.go:123] Gathering logs for kube-scheduler [331d1cec5665] ...
	I0304 04:23:00.998771   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 331d1cec5665"
	I0304 04:23:01.013449   17343 logs.go:123] Gathering logs for kube-proxy [00f9c2dde850] ...
	I0304 04:23:01.013461   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00f9c2dde850"
	I0304 04:23:01.025155   17343 logs.go:123] Gathering logs for kube-apiserver [7f2681dd8e6a] ...
	I0304 04:23:01.025166   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2681dd8e6a"
	I0304 04:23:01.040512   17343 logs.go:123] Gathering logs for etcd [0c27c99061a8] ...
	I0304 04:23:01.040523   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c27c99061a8"
	I0304 04:23:01.055631   17343 logs.go:123] Gathering logs for coredns [6099f9453164] ...
	I0304 04:23:01.055641   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6099f9453164"
	I0304 04:23:01.068670   17343 logs.go:123] Gathering logs for kube-controller-manager [68d9e42070f0] ...
	I0304 04:23:01.068684   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68d9e42070f0"
	I0304 04:23:01.084340   17343 logs.go:123] Gathering logs for storage-provisioner [fa70dd3bbae7] ...
	I0304 04:23:01.084353   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa70dd3bbae7"
	I0304 04:23:01.096009   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:23:01.096021   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:23:01.100579   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:23:01.100588   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:23:01.123493   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:23:01.123503   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:23:01.161073   17343 logs.go:123] Gathering logs for kube-apiserver [a8a74fac7389] ...
	I0304 04:23:01.161086   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8a74fac7389"
	I0304 04:23:01.198435   17343 logs.go:123] Gathering logs for kube-controller-manager [5d91f653fd1f] ...
	I0304 04:23:01.198445   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d91f653fd1f"
	I0304 04:23:01.215956   17343 logs.go:123] Gathering logs for storage-provisioner [891252a59e28] ...
	I0304 04:23:01.215965   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 891252a59e28"
	I0304 04:23:01.254062   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:23:01.254074   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:23:01.273281   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:23:01.273296   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:23:03.789882   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:08.792353   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:08.792525   17343 kubeadm.go:640] restartCluster took 4m3.866255458s
	W0304 04:23:08.792657   17343 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I0304 04:23:08.792723   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0304 04:23:09.822886   17343 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030149208s)
	I0304 04:23:09.822949   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:23:09.828673   17343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0304 04:23:09.831549   17343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0304 04:23:09.834545   17343 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0304 04:23:09.834559   17343 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0304 04:23:09.853566   17343 kubeadm.go:322] [init] Using Kubernetes version: v1.24.1
	I0304 04:23:09.853642   17343 kubeadm.go:322] [preflight] Running pre-flight checks
	I0304 04:23:09.906393   17343 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0304 04:23:09.906443   17343 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0304 04:23:09.906488   17343 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0304 04:23:09.957297   17343 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0304 04:23:09.965452   17343 out.go:204]   - Generating certificates and keys ...
	I0304 04:23:09.965485   17343 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0304 04:23:09.965514   17343 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0304 04:23:09.965553   17343 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0304 04:23:09.965585   17343 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0304 04:23:09.965625   17343 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0304 04:23:09.965660   17343 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0304 04:23:09.965697   17343 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0304 04:23:09.965730   17343 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0304 04:23:09.965808   17343 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0304 04:23:09.965865   17343 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0304 04:23:09.965887   17343 kubeadm.go:322] [certs] Using the existing "sa" key
	I0304 04:23:09.965920   17343 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0304 04:23:10.019764   17343 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0304 04:23:10.265216   17343 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0304 04:23:10.428973   17343 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0304 04:23:10.496883   17343 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0304 04:23:10.529316   17343 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0304 04:23:10.529755   17343 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0304 04:23:10.529794   17343 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0304 04:23:10.602116   17343 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0304 04:23:10.606374   17343 out.go:204]   - Booting up control plane ...
	I0304 04:23:10.606424   17343 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0304 04:23:10.606485   17343 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0304 04:23:10.606530   17343 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0304 04:23:10.606579   17343 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0304 04:23:10.606690   17343 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0304 04:23:15.105076   17343 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502226 seconds
	I0304 04:23:15.105139   17343 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0304 04:23:15.109461   17343 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0304 04:23:15.616924   17343 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0304 04:23:15.617034   17343 kubeadm.go:322] [mark-control-plane] Marking the node stopped-upgrade-289000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0304 04:23:16.122739   17343 kubeadm.go:322] [bootstrap-token] Using token: javfic.twzpj02lkxs7rthh
	I0304 04:23:16.126926   17343 out.go:204]   - Configuring RBAC rules ...
	I0304 04:23:16.126991   17343 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0304 04:23:16.135625   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0304 04:23:16.138589   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0304 04:23:16.139779   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0304 04:23:16.140852   17343 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0304 04:23:16.141987   17343 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0304 04:23:16.145929   17343 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0304 04:23:16.316781   17343 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0304 04:23:16.538919   17343 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0304 04:23:16.539604   17343 kubeadm.go:322] 
	I0304 04:23:16.539637   17343 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0304 04:23:16.539641   17343 kubeadm.go:322] 
	I0304 04:23:16.539689   17343 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0304 04:23:16.539694   17343 kubeadm.go:322] 
	I0304 04:23:16.539712   17343 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0304 04:23:16.539753   17343 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0304 04:23:16.539788   17343 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0304 04:23:16.539791   17343 kubeadm.go:322] 
	I0304 04:23:16.539818   17343 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0304 04:23:16.539822   17343 kubeadm.go:322] 
	I0304 04:23:16.539850   17343 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0304 04:23:16.539854   17343 kubeadm.go:322] 
	I0304 04:23:16.539879   17343 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0304 04:23:16.539918   17343 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0304 04:23:16.539962   17343 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0304 04:23:16.539965   17343 kubeadm.go:322] 
	I0304 04:23:16.540013   17343 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0304 04:23:16.540053   17343 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0304 04:23:16.540057   17343 kubeadm.go:322] 
	I0304 04:23:16.540112   17343 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token javfic.twzpj02lkxs7rthh \
	I0304 04:23:16.540166   17343 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef \
	I0304 04:23:16.540184   17343 kubeadm.go:322] 	--control-plane 
	I0304 04:23:16.540187   17343 kubeadm.go:322] 
	I0304 04:23:16.540230   17343 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0304 04:23:16.540233   17343 kubeadm.go:322] 
	I0304 04:23:16.540279   17343 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token javfic.twzpj02lkxs7rthh \
	I0304 04:23:16.540330   17343 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d9011201d4995caae6ff8661400631de0c6362c7df9a896efc3c38706beefef 
	I0304 04:23:16.540436   17343 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0304 04:23:16.540501   17343 cni.go:84] Creating CNI manager for ""
	I0304 04:23:16.540510   17343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:23:16.543338   17343 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0304 04:23:16.551294   17343 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0304 04:23:16.554432   17343 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0304 04:23:16.559121   17343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0304 04:23:16.559164   17343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:23:16.559180   17343 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab57ba9f65fd4cb3ac8815e4f9baeeca5604e645 minikube.k8s.io/name=stopped-upgrade-289000 minikube.k8s.io/updated_at=2024_03_04T04_23_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0304 04:23:16.602469   17343 kubeadm.go:1088] duration metric: took 43.340458ms to wait for elevateKubeSystemPrivileges.
	I0304 04:23:16.602477   17343 ops.go:34] apiserver oom_adj: -16
	I0304 04:23:16.602492   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:23:16.603205   17343 main.go:141] libmachine: Using SSH client type: external
	I0304 04:23:16.603223   17343 main.go:141] libmachine: Using SSH private key: /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa (-rw-------)
	I0304 04:23:16.603239   17343 main.go:141] libmachine: &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757] /usr/bin/ssh <nil>}
	I0304 04:23:16.603250   17343 main.go:141] libmachine: /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@localhost -o IdentitiesOnly=yes -i /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa -p 52757 -f -NTL 52792:localhost:8443
	I0304 04:23:16.647800   17343 kubeadm.go:406] StartCluster complete in 4m11.780622708s
	I0304 04:23:16.647851   17343 settings.go:142] acquiring lock: {Name:mk5ed2e5b4fa3bf37e56838441d7d3c0b1b72b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:23:16.647948   17343 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:23:16.648527   17343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/kubeconfig: {Name:mkd9e78edd5ce89511d1f03c76ad35ee3697edbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:23:16.648729   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0304 04:23:16.648814   17343 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0304 04:23:16.648860   17343 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:23:16.648869   17343 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-289000"
	I0304 04:23:16.648880   17343 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-289000"
	W0304 04:23:16.648883   17343 addons.go:243] addon storage-provisioner should already be in state true
	I0304 04:23:16.648914   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:23:16.648923   17343 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-289000"
	I0304 04:23:16.648929   17343 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-289000"
	I0304 04:23:16.649048   17343 kapi.go:59] client config for stopped-upgrade-289000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10254f7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:23:16.649988   17343 kapi.go:59] client config for stopped-upgrade-289000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/stopped-upgrade-289000/client.key", CAFile:"/Users/jenkins/minikube-integration/18284-15061/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10254f7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0304 04:23:16.650100   17343 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-289000"
	W0304 04:23:16.650105   17343 addons.go:243] addon default-storageclass should already be in state true
	I0304 04:23:16.650112   17343 host.go:66] Checking if "stopped-upgrade-289000" exists ...
	I0304 04:23:16.654265   17343 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0304 04:23:16.658114   17343 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:23:16.658121   17343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0304 04:23:16.658130   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:23:16.658770   17343 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0304 04:23:16.658777   17343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0304 04:23:16.658782   17343 sshutil.go:53] new ssh client: &{IP:localhost Port:52757 SSHKeyPath:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/stopped-upgrade-289000/id_rsa Username:docker}
	I0304 04:23:16.680448   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           10.0.2.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0304 04:23:16.719210   17343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0304 04:23:16.731348   17343 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0304 04:23:17.143699   17343 start.go:929] {"host.minikube.internal": 10.0.2.2} host record injected into CoreDNS's ConfigMap
	W0304 04:23:46.651266   17343 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "stopped-upgrade-289000" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	E0304 04:23:46.651283   17343 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://10.0.2.15:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 10.0.2.15:8443: i/o timeout
	I0304 04:23:46.651294   17343 start.go:223] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:23:46.655537   17343 out.go:177] * Verifying Kubernetes components...
	I0304 04:23:46.662461   17343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0304 04:23:46.668205   17343 api_server.go:52] waiting for apiserver process to appear ...
	I0304 04:23:46.668251   17343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0304 04:23:46.672859   17343 api_server.go:72] duration metric: took 21.551541ms to wait for apiserver process to appear ...
	I0304 04:23:46.672867   17343 api_server.go:88] waiting for apiserver healthz status ...
	I0304 04:23:46.672877   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0304 04:23:47.145823   17343 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0304 04:23:47.150323   17343 out.go:177] * Enabled addons: storage-provisioner
	I0304 04:23:47.157149   17343 addons.go:505] enable addons completed in 30.50853325s: enabled=[storage-provisioner]
	I0304 04:23:51.674983   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:51.675020   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:23:56.675318   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:23:56.675345   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:01.675661   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:01.675726   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:06.676589   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:06.676655   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:11.677416   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:11.677475   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:16.678522   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:16.678538   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:21.680278   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:21.680390   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:26.682394   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:26.682432   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:31.684637   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:31.684715   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:36.687145   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:36.687193   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:41.689443   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:41.689466   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:46.691249   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:46.691482   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:46.710896   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:24:46.710996   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:46.725608   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:24:46.725685   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:46.739453   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:24:46.739520   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:46.749983   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:24:46.750050   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:46.760662   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:24:46.760749   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:46.770861   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:24:46.770930   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:46.781035   17343 logs.go:276] 0 containers: []
	W0304 04:24:46.781046   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:46.781103   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:46.791983   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:24:46.791997   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:24:46.792002   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:24:46.807201   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:24:46.807214   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:24:46.825596   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:24:46.825607   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:24:46.836941   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:46.836954   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:46.841581   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:46.841590   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:46.877127   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:24:46.877138   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:24:46.891955   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:24:46.891966   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:24:46.903977   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:24:46.903987   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:24:46.923606   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:24:46.923618   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:24:46.935926   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:46.935943   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:46.959560   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:24:46.959592   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:46.973255   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:46.973270   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:47.007101   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:24:47.007115   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:24:49.523763   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:24:54.526065   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:24:54.526237   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:24:54.545559   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:24:54.545646   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:24:54.559663   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:24:54.559738   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:24:54.571925   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:24:54.571990   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:24:54.582708   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:24:54.582777   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:24:54.592955   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:24:54.593034   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:24:54.603441   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:24:54.603508   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:24:54.613539   17343 logs.go:276] 0 containers: []
	W0304 04:24:54.613550   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:24:54.613606   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:24:54.624551   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:24:54.624564   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:24:54.624570   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:24:54.642430   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:24:54.642442   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:24:54.653558   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:24:54.653568   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:24:54.687370   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:24:54.687379   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:24:54.722379   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:24:54.722390   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:24:54.737537   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:24:54.737549   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:24:54.749364   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:24:54.749375   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:24:54.761173   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:24:54.761184   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:24:54.772463   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:24:54.772476   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:24:54.776745   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:24:54.776753   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:24:54.790685   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:24:54.790696   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:24:54.803362   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:24:54.803374   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:24:54.823694   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:24:54.823707   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:24:57.348512   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:02.350734   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:02.350923   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:02.368063   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:02.368164   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:02.381398   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:02.381476   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:02.392989   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:02.393077   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:02.405507   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:02.405578   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:02.415960   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:02.416029   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:02.426259   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:02.426327   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:02.436724   17343 logs.go:276] 0 containers: []
	W0304 04:25:02.436736   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:02.436794   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:02.447252   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:02.447267   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:02.447272   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:02.461957   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:02.461967   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:02.475541   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:02.475551   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:02.491407   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:02.491418   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:02.503607   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:02.503618   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:02.522012   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:02.522025   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:02.533919   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:02.533930   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:02.566015   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:02.566023   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:02.601607   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:02.601618   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:02.616381   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:02.616392   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:02.628199   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:02.628210   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:02.652882   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:02.652894   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:02.664533   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:02.664548   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:05.170151   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:10.172424   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:10.172595   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:10.191072   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:10.191172   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:10.204329   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:10.204396   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:10.215897   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:10.215959   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:10.231327   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:10.231408   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:10.242508   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:10.242587   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:10.252384   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:10.252451   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:10.262242   17343 logs.go:276] 0 containers: []
	W0304 04:25:10.262253   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:10.262316   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:10.272971   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:10.272985   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:10.272991   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:10.284574   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:10.284587   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:10.309001   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:10.309010   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:10.343361   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:10.343369   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:10.357708   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:10.357720   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:10.369740   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:10.369751   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:10.384356   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:10.384367   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:10.403928   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:10.403938   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:10.421279   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:10.421290   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:10.433435   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:10.433445   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:10.445068   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:10.445080   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:10.449539   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:10.449544   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:10.483427   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:10.483438   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:12.997725   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:18.000064   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:18.000464   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:18.040629   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:18.040761   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:18.062923   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:18.063052   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:18.078067   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:18.078144   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:18.102615   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:18.102693   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:18.116113   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:18.116184   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:18.126506   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:18.126572   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:18.136846   17343 logs.go:276] 0 containers: []
	W0304 04:25:18.136859   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:18.136916   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:18.148471   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:18.148486   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:18.148491   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:18.167117   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:18.167130   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:18.189723   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:18.189731   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:18.211800   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:18.211811   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:18.223353   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:18.223362   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:18.258316   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:18.258328   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:18.272718   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:18.272730   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:18.284658   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:18.284672   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:18.299341   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:18.299353   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:18.310887   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:18.310900   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:18.334535   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:18.334545   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:18.366873   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:18.366886   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:18.371050   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:18.371057   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:20.886871   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:25.888492   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:25.888875   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:25.929415   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:25.929545   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:25.951637   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:25.951743   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:25.967358   17343 logs.go:276] 2 containers: [52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:25.967435   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:25.979517   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:25.979583   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:25.990187   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:25.990249   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:26.000327   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:26.000396   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:26.010473   17343 logs.go:276] 0 containers: []
	W0304 04:25:26.010487   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:26.010547   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:26.020852   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:26.020866   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:26.020871   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:26.032548   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:26.032565   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:26.066906   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:26.066917   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:26.081847   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:26.081860   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:26.095546   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:26.095559   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:26.107280   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:26.107293   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:26.121819   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:26.121832   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:26.144510   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:26.144517   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:26.175795   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:26.175811   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:26.180349   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:26.180358   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:26.191921   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:26.191935   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:26.207850   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:26.207861   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:26.224816   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:26.224828   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:28.742419   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:33.745094   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:33.745478   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:33.781964   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:33.782114   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:33.802948   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:33.803101   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:33.818532   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:33.818608   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:33.831649   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:33.831726   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:33.842412   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:33.842477   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:33.853468   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:33.853540   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:33.867758   17343 logs.go:276] 0 containers: []
	W0304 04:25:33.867770   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:33.867835   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:33.882807   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:33.882822   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:25:33.882828   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:25:33.894000   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:33.894012   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:33.911429   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:33.911439   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:33.936079   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:33.936086   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:33.947560   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:33.947571   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:33.980179   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:33.980186   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:33.984441   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:33.984450   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:33.996323   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:33.996333   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:34.007696   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:34.007705   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:34.019079   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:34.019090   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:34.034221   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:34.034234   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:34.048199   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:25:34.048208   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:25:34.059208   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:34.059223   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:34.073269   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:34.073279   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:34.107892   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:34.107904   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:36.622231   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:41.624753   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:41.624875   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:41.651603   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:41.651679   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:41.662403   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:41.662475   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:41.673174   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:41.673243   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:41.684171   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:41.684234   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:41.694596   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:41.694665   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:41.705609   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:41.705668   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:41.720810   17343 logs.go:276] 0 containers: []
	W0304 04:25:41.720821   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:41.720879   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:41.735349   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:41.735364   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:25:41.735369   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:25:41.746608   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:41.746622   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:41.764457   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:41.764467   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:41.798114   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:41.798123   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:41.812287   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:41.812297   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:41.826454   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:41.826463   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:41.860008   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:41.860019   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:41.872741   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:41.872755   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:41.884789   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:25:41.884801   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:25:41.896273   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:41.896284   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:41.908346   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:41.908358   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:41.920827   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:41.920838   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:41.945241   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:41.945255   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:41.949749   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:41.949755   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:41.963344   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:41.963353   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:44.479444   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:49.481827   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:49.482283   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:49.522170   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:49.522305   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:49.542628   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:49.542726   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:49.559743   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:49.559823   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:49.571746   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:49.571810   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:49.582472   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:49.582543   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:49.593216   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:49.593281   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:49.603613   17343 logs.go:276] 0 containers: []
	W0304 04:25:49.603625   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:49.603683   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:49.614305   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:49.614320   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:25:49.614325   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:25:49.631153   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:49.631163   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:49.642950   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:49.642963   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:49.654762   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:49.654775   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:49.670090   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:49.670102   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:49.681987   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:49.681997   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:49.713957   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:49.713966   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:49.748368   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:25:49.748381   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:25:49.760510   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:49.760523   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:49.774818   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:49.774830   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:49.799515   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:49.799527   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:49.803742   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:49.803750   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:49.817376   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:49.817388   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:49.829789   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:49.829801   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:49.844173   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:49.844182   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:52.362214   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:25:57.363688   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:25:57.364042   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:25:57.402169   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:25:57.402305   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:25:57.423232   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:25:57.423339   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:25:57.440345   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:25:57.440421   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:25:57.452609   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:25:57.452667   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:25:57.464526   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:25:57.464582   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:25:57.475031   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:25:57.475099   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:25:57.484754   17343 logs.go:276] 0 containers: []
	W0304 04:25:57.484767   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:25:57.484826   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:25:57.495294   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:25:57.495312   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:25:57.495317   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:25:57.507015   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:25:57.507023   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:25:57.519244   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:25:57.519252   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:25:57.543935   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:25:57.543945   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:25:57.555874   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:25:57.555887   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:25:57.569682   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:25:57.569694   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:25:57.585718   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:25:57.585730   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:25:57.597288   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:25:57.597299   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:25:57.614630   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:25:57.614640   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:25:57.626053   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:25:57.626067   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:25:57.630147   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:25:57.630153   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:25:57.664045   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:25:57.664058   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:25:57.697994   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:25:57.698004   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:25:57.710150   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:25:57.710163   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:25:57.724397   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:25:57.724407   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:00.237250   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:05.238124   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:05.238423   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:05.267906   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:05.268029   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:05.285874   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:05.285961   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:05.299331   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:05.299407   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:05.311251   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:05.311325   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:05.321471   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:05.321537   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:05.332093   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:05.332164   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:05.341788   17343 logs.go:276] 0 containers: []
	W0304 04:26:05.341798   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:05.341851   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:05.356938   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:05.356954   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:05.356959   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:05.368248   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:05.368260   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:05.379781   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:05.379794   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:05.411700   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:05.411709   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:05.415604   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:05.415612   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:05.450006   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:05.450017   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:05.464362   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:05.464371   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:05.476373   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:05.476386   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:05.491074   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:05.491085   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:05.503522   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:05.503532   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:05.521316   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:05.521327   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:05.533152   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:05.533163   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:05.547029   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:05.547037   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:05.558529   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:05.558543   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:05.575714   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:05.575725   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:08.100371   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:13.101288   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:13.101709   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:13.141523   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:13.141655   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:13.167153   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:13.167258   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:13.181708   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:13.181787   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:13.193959   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:13.194030   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:13.204442   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:13.204504   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:13.215334   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:13.215394   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:13.225300   17343 logs.go:276] 0 containers: []
	W0304 04:26:13.225315   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:13.225374   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:13.235432   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:13.235448   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:13.235453   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:13.251903   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:13.251913   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:13.263933   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:13.263945   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:13.281565   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:13.281576   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:13.285947   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:13.285952   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:13.300069   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:13.300081   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:13.312362   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:13.312374   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:13.324582   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:13.324591   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:13.360163   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:13.360175   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:13.371808   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:13.371817   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:13.386897   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:13.386908   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:13.409808   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:13.409814   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:13.442162   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:13.442169   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:13.455551   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:13.455560   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:13.473953   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:13.473964   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:15.987318   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:20.990039   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:20.990475   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:21.032152   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:21.032272   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:21.054019   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:21.054135   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:21.069718   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:21.069802   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:21.083232   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:21.083308   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:21.098737   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:21.098807   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:21.109469   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:21.109529   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:21.120794   17343 logs.go:276] 0 containers: []
	W0304 04:26:21.120808   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:21.120861   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:21.136136   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:21.136152   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:21.136158   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:21.160898   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:21.160910   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:21.178488   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:21.178500   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:21.190267   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:21.190279   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:21.202518   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:21.202531   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:21.217107   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:21.217117   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:21.229111   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:21.229119   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:21.240487   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:21.240496   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:21.252486   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:21.252498   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:21.285766   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:21.285774   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:21.297359   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:21.297371   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:21.309375   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:21.309387   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:21.343304   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:21.343318   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:21.347478   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:21.347487   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:21.365097   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:21.365109   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:23.882832   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:28.885100   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:28.885513   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:28.924696   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:28.924826   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:28.946347   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:28.946457   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:28.964736   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:28.964813   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:28.976784   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:28.976855   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:28.992087   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:28.992155   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:29.006861   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:29.006925   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:29.019939   17343 logs.go:276] 0 containers: []
	W0304 04:26:29.019949   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:29.020007   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:29.030933   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:29.030951   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:29.030956   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:29.048672   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:29.048683   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:29.067148   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:29.067161   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:29.071144   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:29.071155   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:29.104990   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:29.105001   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:29.117663   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:29.117677   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:29.132074   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:29.132088   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:29.146443   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:29.146455   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:29.158548   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:29.158561   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:29.175336   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:29.175350   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:29.199519   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:29.199530   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:29.211171   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:29.211182   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:29.245976   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:29.245992   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:29.258456   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:29.258468   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:29.270735   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:29.270746   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:31.785343   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:36.787446   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:36.787510   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:36.798893   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:36.798969   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:36.810296   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:36.810362   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:36.821022   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:36.821087   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:36.831500   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:36.831559   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:36.841853   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:36.841912   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:36.852543   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:36.852596   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:36.862336   17343 logs.go:276] 0 containers: []
	W0304 04:26:36.862347   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:36.862403   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:36.872944   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:36.872973   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:36.872979   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:36.905102   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:36.905108   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:36.920177   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:36.920190   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:36.943364   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:36.943371   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:36.957977   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:36.957990   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:36.969834   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:36.969845   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:36.981772   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:36.981782   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:36.997688   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:36.997698   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:37.009044   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:37.009055   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:37.020398   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:37.020407   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:37.032143   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:37.032153   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:37.049573   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:37.049582   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:37.060731   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:37.060743   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:37.065265   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:37.065273   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:37.100517   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:37.100528   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:39.617694   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:44.620447   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:44.620746   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:44.656038   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:44.656162   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:44.679733   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:44.679846   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:44.697115   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:44.697192   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:44.710870   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:44.710930   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:44.722824   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:44.722886   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:44.733030   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:44.733102   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:44.743219   17343 logs.go:276] 0 containers: []
	W0304 04:26:44.743231   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:44.743280   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:44.753730   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:44.753744   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:44.753749   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:44.768175   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:44.768186   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:44.781777   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:44.781790   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:44.794149   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:44.794163   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:44.825379   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:44.825390   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:44.844485   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:44.844496   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:44.867517   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:44.867524   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:44.878595   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:44.878603   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:44.910424   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:44.910432   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:44.914418   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:44.914423   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:44.950934   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:44.950945   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:44.967863   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:44.967873   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:44.979402   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:44.979416   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:44.991389   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:44.991403   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:45.007916   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:45.007928   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:47.521176   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:26:52.523433   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:26:52.523915   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:26:52.561378   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:26:52.561508   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:26:52.582819   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:26:52.582944   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:26:52.597912   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:26:52.597993   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:26:52.610065   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:26:52.610124   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:26:52.620758   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:26:52.620824   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:26:52.631213   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:26:52.631267   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:26:52.641872   17343 logs.go:276] 0 containers: []
	W0304 04:26:52.641883   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:26:52.641940   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:26:52.652892   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:26:52.652910   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:26:52.652915   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:26:52.666147   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:26:52.666160   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:26:52.688001   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:26:52.688012   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:26:52.720935   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:26:52.720945   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:26:52.756033   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:26:52.756046   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:26:52.770107   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:26:52.770121   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:26:52.783729   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:26:52.783741   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:26:52.796226   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:26:52.796240   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:26:52.810940   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:26:52.810954   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:26:52.827582   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:26:52.827594   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:26:52.838957   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:26:52.838968   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:26:52.850587   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:26:52.850599   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:26:52.854809   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:26:52.854817   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:26:52.869716   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:26:52.869726   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:26:52.881572   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:26:52.881583   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:26:55.407043   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:00.409446   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:00.409537   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:27:00.420856   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:27:00.420912   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:27:00.437165   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:27:00.437215   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:27:00.449494   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:27:00.449553   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:27:00.460703   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:27:00.460750   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:27:00.472298   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:27:00.472364   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:27:00.485315   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:27:00.485368   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:27:00.496043   17343 logs.go:276] 0 containers: []
	W0304 04:27:00.496053   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:27:00.496097   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:27:00.506521   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:27:00.506541   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:27:00.506547   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:27:00.521587   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:27:00.521601   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:27:00.536583   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:27:00.536593   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:27:00.549370   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:27:00.549379   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:27:00.566138   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:27:00.566148   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:27:00.585718   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:27:00.585727   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:27:00.618622   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:27:00.618645   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:27:00.656924   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:27:00.656935   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:27:00.673921   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:27:00.673932   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:27:00.688246   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:27:00.688256   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:27:00.700906   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:27:00.700916   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:27:00.714991   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:27:00.715006   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:27:00.719735   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:27:00.719743   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:27:00.732040   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:27:00.732052   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:27:00.745457   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:27:00.745467   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:27:03.273667   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:08.275173   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:08.275497   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:27:08.305531   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:27:08.305647   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:27:08.323295   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:27:08.323406   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:27:08.344261   17343 logs.go:276] 4 containers: [425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:27:08.344331   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:27:08.355238   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:27:08.355293   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:27:08.365976   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:27:08.366035   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:27:08.378412   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:27:08.378454   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:27:08.395334   17343 logs.go:276] 0 containers: []
	W0304 04:27:08.395348   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:27:08.395425   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:27:08.419175   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:27:08.419199   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:27:08.419206   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:27:08.439519   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:27:08.439532   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:27:08.453074   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:27:08.453087   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	I0304 04:27:08.466917   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:27:08.466931   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:27:08.492645   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:27:08.492662   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:27:08.498188   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:27:08.498200   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:27:08.536031   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:27:08.536042   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:27:08.548276   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:27:08.548288   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:27:08.562984   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:27:08.562998   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:27:08.578344   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:27:08.578355   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:27:08.612347   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:27:08.612356   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:27:08.626525   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:27:08.626540   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:27:08.643974   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:27:08.643985   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:27:08.656328   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:27:08.656337   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:27:08.669022   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:27:08.669034   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:27:11.181310   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:16.183434   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:16.183721   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:27:16.211995   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:27:16.212125   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:27:16.229696   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:27:16.229787   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:27:16.243977   17343 logs.go:276] 6 containers: [c41d4f8f50a7 154c8d890f81 425b52dd1b06 e2ff2da9b509 52c78c839fc7 97f2e9ac37d2]
	I0304 04:27:16.244056   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:27:16.255470   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:27:16.255538   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:27:16.265738   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:27:16.265806   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:27:16.278812   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:27:16.278884   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:27:16.289238   17343 logs.go:276] 0 containers: []
	W0304 04:27:16.289251   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:27:16.289306   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:27:16.299512   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:27:16.299524   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:27:16.299530   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:27:16.303726   17343 logs.go:123] Gathering logs for coredns [52c78c839fc7] ...
	I0304 04:27:16.303732   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c78c839fc7"
	I0304 04:27:16.318987   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:27:16.318999   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:27:16.343146   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:27:16.343160   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:27:16.374446   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:27:16.374453   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:27:16.388467   17343 logs.go:123] Gathering logs for coredns [154c8d890f81] ...
	I0304 04:27:16.388480   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154c8d890f81"
	I0304 04:27:16.405005   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:27:16.405018   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:27:16.428096   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:27:16.428108   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:27:16.444432   17343 logs.go:123] Gathering logs for coredns [97f2e9ac37d2] ...
	I0304 04:27:16.444446   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2"
	W0304 04:27:16.454784   17343 logs.go:130] failed coredns [97f2e9ac37d2]: command: /bin/bash -c "docker logs --tail 400 97f2e9ac37d2" /bin/bash -c "docker logs --tail 400 97f2e9ac37d2": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 97f2e9ac37d2
	 output: 
	** stderr ** 
	Error: No such container: 97f2e9ac37d2
	
	** /stderr **
	I0304 04:27:16.454795   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:27:16.454800   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:27:16.478474   17343 logs.go:123] Gathering logs for coredns [c41d4f8f50a7] ...
	I0304 04:27:16.478484   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41d4f8f50a7"
	I0304 04:27:16.489393   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:27:16.489406   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:27:16.501412   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:27:16.501425   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:27:16.513039   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:27:16.513051   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:27:16.546998   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:27:16.547009   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:27:16.561388   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:27:16.561398   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:27:16.575709   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:27:16.575720   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:27:19.093918   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:24.096586   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:24.096666   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:27:24.108512   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:27:24.108570   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:27:24.119904   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:27:24.119978   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:27:24.131470   17343 logs.go:276] 4 containers: [c41d4f8f50a7 154c8d890f81 425b52dd1b06 e2ff2da9b509]
	I0304 04:27:24.131536   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:27:24.144377   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:27:24.144435   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:27:24.155354   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:27:24.155407   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:27:24.168565   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:27:24.168625   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:27:24.194912   17343 logs.go:276] 0 containers: []
	W0304 04:27:24.194920   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:27:24.194958   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:27:24.205873   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:27:24.205893   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:27:24.205899   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:27:24.223010   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:27:24.223029   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:27:24.256908   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:27:24.256930   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:27:24.261802   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:27:24.261814   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:27:24.277998   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:27:24.278008   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:27:24.293997   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:27:24.294008   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:27:24.306127   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:27:24.306135   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:27:24.318505   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:27:24.318517   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:27:24.331067   17343 logs.go:123] Gathering logs for coredns [154c8d890f81] ...
	I0304 04:27:24.331083   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154c8d890f81"
	I0304 04:27:24.343849   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:27:24.343858   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:27:24.359215   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:27:24.359230   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:27:24.379170   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:27:24.379186   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:27:24.399792   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:27:24.399802   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:27:24.439053   17343 logs.go:123] Gathering logs for coredns [c41d4f8f50a7] ...
	I0304 04:27:24.439062   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41d4f8f50a7"
	I0304 04:27:24.451899   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:27:24.451910   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:27:26.977510   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:31.979461   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:31.979907   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:27:32.021106   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:27:32.021241   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:27:32.042626   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:27:32.042741   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:27:32.057829   17343 logs.go:276] 4 containers: [c41d4f8f50a7 154c8d890f81 425b52dd1b06 e2ff2da9b509]
	I0304 04:27:32.057902   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:27:32.070479   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:27:32.070576   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:27:32.081452   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:27:32.081518   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:27:32.091762   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:27:32.091828   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:27:32.102650   17343 logs.go:276] 0 containers: []
	W0304 04:27:32.102661   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:27:32.102721   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:27:32.112811   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:27:32.112829   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:27:32.112834   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:27:32.126709   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:27:32.126720   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:27:32.148754   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:27:32.148762   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:27:32.160519   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:27:32.160533   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:27:32.173444   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:27:32.173454   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:27:32.205981   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:27:32.205990   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:27:32.210262   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:27:32.210267   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:27:32.224566   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:27:32.224575   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:27:32.236524   17343 logs.go:123] Gathering logs for coredns [c41d4f8f50a7] ...
	I0304 04:27:32.236534   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41d4f8f50a7"
	I0304 04:27:32.247690   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:27:32.247700   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:27:32.260102   17343 logs.go:123] Gathering logs for coredns [154c8d890f81] ...
	I0304 04:27:32.260113   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154c8d890f81"
	I0304 04:27:32.272027   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:27:32.272036   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:27:32.292599   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:27:32.292608   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:27:32.305556   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:27:32.305566   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:27:32.340305   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:27:32.340316   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:27:34.856581   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:39.859212   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:39.859615   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0304 04:27:39.902216   17343 logs.go:276] 1 containers: [a51ed72b35aa]
	I0304 04:27:39.902327   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0304 04:27:39.925522   17343 logs.go:276] 1 containers: [cc26a20e8db4]
	I0304 04:27:39.925624   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0304 04:27:39.943444   17343 logs.go:276] 4 containers: [c41d4f8f50a7 154c8d890f81 425b52dd1b06 e2ff2da9b509]
	I0304 04:27:39.943522   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0304 04:27:39.955312   17343 logs.go:276] 1 containers: [081e7d3eac80]
	I0304 04:27:39.955385   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0304 04:27:39.965647   17343 logs.go:276] 1 containers: [419aa2964728]
	I0304 04:27:39.965715   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0304 04:27:39.976913   17343 logs.go:276] 1 containers: [750e2426d83a]
	I0304 04:27:39.976985   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0304 04:27:39.987807   17343 logs.go:276] 0 containers: []
	W0304 04:27:39.987819   17343 logs.go:278] No container was found matching "kindnet"
	I0304 04:27:39.987891   17343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0304 04:27:39.998278   17343 logs.go:276] 1 containers: [ddaeb11f5ad4]
	I0304 04:27:39.998293   17343 logs.go:123] Gathering logs for storage-provisioner [ddaeb11f5ad4] ...
	I0304 04:27:39.998299   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddaeb11f5ad4"
	I0304 04:27:40.010445   17343 logs.go:123] Gathering logs for dmesg ...
	I0304 04:27:40.010459   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0304 04:27:40.014553   17343 logs.go:123] Gathering logs for describe nodes ...
	I0304 04:27:40.014559   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0304 04:27:40.049206   17343 logs.go:123] Gathering logs for kube-apiserver [a51ed72b35aa] ...
	I0304 04:27:40.049217   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a51ed72b35aa"
	I0304 04:27:40.063635   17343 logs.go:123] Gathering logs for kube-proxy [419aa2964728] ...
	I0304 04:27:40.063648   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 419aa2964728"
	I0304 04:27:40.076503   17343 logs.go:123] Gathering logs for kube-controller-manager [750e2426d83a] ...
	I0304 04:27:40.076516   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 750e2426d83a"
	I0304 04:27:40.094689   17343 logs.go:123] Gathering logs for Docker ...
	I0304 04:27:40.094702   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0304 04:27:40.117200   17343 logs.go:123] Gathering logs for kubelet ...
	I0304 04:27:40.117207   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0304 04:27:40.150405   17343 logs.go:123] Gathering logs for coredns [425b52dd1b06] ...
	I0304 04:27:40.150416   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425b52dd1b06"
	I0304 04:27:40.163891   17343 logs.go:123] Gathering logs for kube-scheduler [081e7d3eac80] ...
	I0304 04:27:40.163906   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081e7d3eac80"
	I0304 04:27:40.179175   17343 logs.go:123] Gathering logs for etcd [cc26a20e8db4] ...
	I0304 04:27:40.179185   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc26a20e8db4"
	I0304 04:27:40.192591   17343 logs.go:123] Gathering logs for coredns [e2ff2da9b509] ...
	I0304 04:27:40.192600   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2ff2da9b509"
	I0304 04:27:40.205065   17343 logs.go:123] Gathering logs for coredns [c41d4f8f50a7] ...
	I0304 04:27:40.205073   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41d4f8f50a7"
	I0304 04:27:40.217606   17343 logs.go:123] Gathering logs for coredns [154c8d890f81] ...
	I0304 04:27:40.217616   17343 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154c8d890f81"
	I0304 04:27:40.229551   17343 logs.go:123] Gathering logs for container status ...
	I0304 04:27:40.229561   17343 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0304 04:27:42.743603   17343 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0304 04:27:47.745776   17343 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0304 04:27:47.750073   17343 out.go:177] 
	W0304 04:27:47.754002   17343 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0304 04:27:47.754012   17343 out.go:239] * 
	W0304 04:27:47.754427   17343 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:27:47.766014   17343 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-289000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (611.09s)

TestPause/serial/Start (10.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-719000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-719000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.9910665s)

-- stdout --
	* [pause-719000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-719000 in cluster pause-719000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-719000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-719000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-719000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-719000 -n pause-719000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-719000 -n pause-719000: exit status 7 (57.355625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-719000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.05s)
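Every failure in this report reduces to the same root cause: nothing is listening on `/var/run/socket_vmnet`, so the qemu2 driver cannot attach the VM to the vmnet network ("Connection refused"). A minimal diagnostic sketch (the helper function is illustrative, not part of minikube or socket_vmnet):

```shell
# Probe whether a unix-domain socket exists at the given path.
# Prints "present" or "missing"; "missing" matches the
# "Connection refused" errors seen throughout this report.
check_socket() {
  if [ -S "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

check_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the helper daemon before re-running the suite (for a Homebrew install, typically `sudo brew services start socket_vmnet`) is the usual fix. Note `-S` only tests that the socket file exists; a stale socket left by a crashed daemon can still refuse connections.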

TestNoKubernetes/serial/StartWithK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 : exit status 80 (9.833471916s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-980000 in cluster NoKubernetes-980000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (50.554ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

TestNoKubernetes/serial/StartWithStopK8s (5.9s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 : exit status 80 (5.837215042s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-980000
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (60.754333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.90s)

TestNoKubernetes/serial/Start (5.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 : exit status 80 (5.834042334s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-980000
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (47.913792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.88s)

TestNoKubernetes/serial/StartNoArgs (5.89s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 : exit status 80 (5.830517667s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-980000
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (57.395166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.89s)

TestNetworkPlugins/group/auto/Start (9.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.956134792s)

-- stdout --
	* [auto-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-315000 in cluster auto-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:26:23.953981   17701 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:26:23.954110   17701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:26:23.954113   17701 out.go:304] Setting ErrFile to fd 2...
	I0304 04:26:23.954115   17701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:26:23.954246   17701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:26:23.955302   17701 out.go:298] Setting JSON to false
	I0304 04:26:23.971389   17701 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10555,"bootTime":1709544628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:26:23.971458   17701 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:26:23.977433   17701 out.go:177] * [auto-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:26:23.984370   17701 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:26:23.984443   17701 notify.go:220] Checking for updates...
	I0304 04:26:23.987429   17701 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:26:23.990438   17701 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:26:23.993353   17701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:26:23.996412   17701 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:26:23.999376   17701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:26:24.002734   17701 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:26:24.002797   17701 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:26:24.002846   17701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:26:24.007319   17701 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:26:24.014359   17701 start.go:299] selected driver: qemu2
	I0304 04:26:24.014366   17701 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:26:24.014374   17701 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:26:24.016654   17701 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:26:24.020401   17701 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:26:24.023420   17701 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:26:24.023455   17701 cni.go:84] Creating CNI manager for ""
	I0304 04:26:24.023462   17701 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:26:24.023466   17701 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:26:24.023472   17701 start_flags.go:323] config:
	{Name:auto-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:26:24.027997   17701 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:26:24.035369   17701 out.go:177] * Starting control plane node auto-315000 in cluster auto-315000
	I0304 04:26:24.039331   17701 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:26:24.039344   17701 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:26:24.039352   17701 cache.go:56] Caching tarball of preloaded images
	I0304 04:26:24.039397   17701 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:26:24.039402   17701 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:26:24.039463   17701 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/auto-315000/config.json ...
	I0304 04:26:24.039473   17701 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/auto-315000/config.json: {Name:mk5494b0d9ba5fe2fdafb44211ad92d72aa1d904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:26:24.039667   17701 start.go:365] acquiring machines lock for auto-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:26:24.039694   17701 start.go:369] acquired machines lock for "auto-315000" in 22.417µs
	I0304 04:26:24.039704   17701 start.go:93] Provisioning new machine with config: &{Name:auto-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:auto-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:26:24.039734   17701 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:26:24.048387   17701 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:26:24.062755   17701 start.go:159] libmachine.API.Create for "auto-315000" (driver="qemu2")
	I0304 04:26:24.062776   17701 client.go:168] LocalClient.Create starting
	I0304 04:26:24.062835   17701 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:26:24.062865   17701 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:24.062875   17701 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:24.062915   17701 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:26:24.062936   17701 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:24.062945   17701 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:24.063263   17701 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:26:24.206782   17701 main.go:141] libmachine: Creating SSH key...
	I0304 04:26:24.397506   17701 main.go:141] libmachine: Creating Disk image...
	I0304 04:26:24.397517   17701 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:26:24.397849   17701 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2
	I0304 04:26:24.410584   17701 main.go:141] libmachine: STDOUT: 
	I0304 04:26:24.410613   17701 main.go:141] libmachine: STDERR: 
	I0304 04:26:24.410690   17701 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2 +20000M
	I0304 04:26:24.421597   17701 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:26:24.421621   17701 main.go:141] libmachine: STDERR: 
	I0304 04:26:24.421640   17701 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2
	I0304 04:26:24.421644   17701 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:26:24.421675   17701 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:9f:09:c3:1a:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2
	I0304 04:26:24.423379   17701 main.go:141] libmachine: STDOUT: 
	I0304 04:26:24.423401   17701 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:26:24.423423   17701 client.go:171] LocalClient.Create took 360.6445ms
	I0304 04:26:26.425543   17701 start.go:128] duration metric: createHost completed in 2.385813s
	I0304 04:26:26.425575   17701 start.go:83] releasing machines lock for "auto-315000", held for 2.385889458s
	W0304 04:26:26.425626   17701 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:26.435541   17701 out.go:177] * Deleting "auto-315000" in qemu2 ...
	W0304 04:26:26.457117   17701 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:26.457130   17701 start.go:709] Will try again in 5 seconds ...
	I0304 04:26:31.459334   17701 start.go:365] acquiring machines lock for auto-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:26:31.459856   17701 start.go:369] acquired machines lock for "auto-315000" in 390.416µs
	I0304 04:26:31.459913   17701 start.go:93] Provisioning new machine with config: &{Name:auto-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:auto-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:26:31.460149   17701 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:26:31.470703   17701 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:26:31.515958   17701 start.go:159] libmachine.API.Create for "auto-315000" (driver="qemu2")
	I0304 04:26:31.516020   17701 client.go:168] LocalClient.Create starting
	I0304 04:26:31.516140   17701 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:26:31.516200   17701 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:31.516214   17701 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:31.516274   17701 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:26:31.516315   17701 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:31.516331   17701 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:31.516820   17701 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:26:31.761855   17701 main.go:141] libmachine: Creating SSH key...
	I0304 04:26:31.813145   17701 main.go:141] libmachine: Creating Disk image...
	I0304 04:26:31.813150   17701 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:26:31.813323   17701 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2
	I0304 04:26:31.825633   17701 main.go:141] libmachine: STDOUT: 
	I0304 04:26:31.825662   17701 main.go:141] libmachine: STDERR: 
	I0304 04:26:31.825720   17701 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2 +20000M
	I0304 04:26:31.837129   17701 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:26:31.837146   17701 main.go:141] libmachine: STDERR: 
	I0304 04:26:31.837160   17701 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2
	I0304 04:26:31.837164   17701 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:26:31.837192   17701 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:cf:79:b6:de:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/auto-315000/disk.qcow2
	I0304 04:26:31.838996   17701 main.go:141] libmachine: STDOUT: 
	I0304 04:26:31.839011   17701 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:26:31.839025   17701 client.go:171] LocalClient.Create took 322.999958ms
	I0304 04:26:33.841365   17701 start.go:128] duration metric: createHost completed in 2.381149083s
	I0304 04:26:33.841463   17701 start.go:83] releasing machines lock for "auto-315000", held for 2.381598042s
	W0304 04:26:33.841861   17701 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:33.850454   17701 out.go:177] 
	W0304 04:26:33.853410   17701 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:26:33.853444   17701 out.go:239] * 
	* 
	W0304 04:26:33.856184   17701 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:26:33.866399   17701 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.96s)
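Every failure above bottoms out in the same error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the `socket_vmnet` daemon was not listening on the CI host when `socket_vmnet_client` tried to hand QEMU a network socket. A minimal diagnostic sketch (the default socket path matches the log; the `brew services` remedy in the message is an assumption for Homebrew-managed installs — this host invokes the client from `/opt/socket_vmnet`, so a source install may need its launchd plist reloaded instead):

```shell
#!/bin/sh
# Check whether the socket_vmnet control socket exists and is a Unix socket.
# Prints a hint if it is missing; does not attempt to start the daemon itself.
check_socket_vmnet() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "ok: $sock exists"
  else
    echo "missing: $sock (daemon not running; e.g. 'sudo brew services start socket_vmnet' for Homebrew installs)"
  fi
}

check_socket_vmnet "$@"
```

Until the socket exists, every `qemu2`-driver test in this run will fail the same way during VM creation, which is consistent with the 140/251 failure count.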

TestNetworkPlugins/group/kindnet/Start (10.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.023759875s)

-- stdout --
	* [kindnet-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-315000 in cluster kindnet-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:26:36.267965   17818 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:26:36.268087   17818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:26:36.268090   17818 out.go:304] Setting ErrFile to fd 2...
	I0304 04:26:36.268092   17818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:26:36.268212   17818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:26:36.269291   17818 out.go:298] Setting JSON to false
	I0304 04:26:36.285693   17818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10568,"bootTime":1709544628,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:26:36.285769   17818 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:26:36.290187   17818 out.go:177] * [kindnet-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:26:36.299073   17818 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:26:36.299121   17818 notify.go:220] Checking for updates...
	I0304 04:26:36.303539   17818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:26:36.307090   17818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:26:36.310086   17818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:26:36.313110   17818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:26:36.316089   17818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:26:36.319464   17818 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:26:36.319532   17818 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:26:36.319594   17818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:26:36.324062   17818 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:26:36.331064   17818 start.go:299] selected driver: qemu2
	I0304 04:26:36.331071   17818 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:26:36.331083   17818 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:26:36.333444   17818 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:26:36.337058   17818 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:26:36.340185   17818 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:26:36.340240   17818 cni.go:84] Creating CNI manager for "kindnet"
	I0304 04:26:36.340247   17818 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0304 04:26:36.340257   17818 start_flags.go:323] config:
	{Name:kindnet-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:26:36.344755   17818 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:26:36.351919   17818 out.go:177] * Starting control plane node kindnet-315000 in cluster kindnet-315000
	I0304 04:26:36.356019   17818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:26:36.356032   17818 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:26:36.356038   17818 cache.go:56] Caching tarball of preloaded images
	I0304 04:26:36.356080   17818 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:26:36.356084   17818 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:26:36.356136   17818 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kindnet-315000/config.json ...
	I0304 04:26:36.356145   17818 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kindnet-315000/config.json: {Name:mk554e8445d77467ee2bdb67d37c09a8af9a653a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:26:36.356344   17818 start.go:365] acquiring machines lock for kindnet-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:26:36.356370   17818 start.go:369] acquired machines lock for "kindnet-315000" in 21.875µs
	I0304 04:26:36.356380   17818 start.go:93] Provisioning new machine with config: &{Name:kindnet-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:26:36.356409   17818 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:26:36.365076   17818 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:26:36.380378   17818 start.go:159] libmachine.API.Create for "kindnet-315000" (driver="qemu2")
	I0304 04:26:36.380404   17818 client.go:168] LocalClient.Create starting
	I0304 04:26:36.380474   17818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:26:36.380505   17818 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:36.380513   17818 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:36.380554   17818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:26:36.380576   17818 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:36.380583   17818 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:36.380952   17818 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:26:36.522490   17818 main.go:141] libmachine: Creating SSH key...
	I0304 04:26:36.776880   17818 main.go:141] libmachine: Creating Disk image...
	I0304 04:26:36.776889   17818 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:26:36.777066   17818 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2
	I0304 04:26:36.790426   17818 main.go:141] libmachine: STDOUT: 
	I0304 04:26:36.790458   17818 main.go:141] libmachine: STDERR: 
	I0304 04:26:36.790516   17818 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2 +20000M
	I0304 04:26:36.802974   17818 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:26:36.802994   17818 main.go:141] libmachine: STDERR: 
	I0304 04:26:36.803017   17818 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2
	I0304 04:26:36.803021   17818 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:26:36.803049   17818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:51:cc:06:b1:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2
	I0304 04:26:36.805213   17818 main.go:141] libmachine: STDOUT: 
	I0304 04:26:36.805230   17818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:26:36.805247   17818 client.go:171] LocalClient.Create took 424.836208ms
	I0304 04:26:38.807512   17818 start.go:128] duration metric: createHost completed in 2.451081333s
	I0304 04:26:38.807615   17818 start.go:83] releasing machines lock for "kindnet-315000", held for 2.451250209s
	W0304 04:26:38.807737   17818 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:38.817876   17818 out.go:177] * Deleting "kindnet-315000" in qemu2 ...
	W0304 04:26:38.851062   17818 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:38.851101   17818 start.go:709] Will try again in 5 seconds ...
	I0304 04:26:43.853261   17818 start.go:365] acquiring machines lock for kindnet-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:26:43.853771   17818 start.go:369] acquired machines lock for "kindnet-315000" in 401.667µs
	I0304 04:26:43.853952   17818 start.go:93] Provisioning new machine with config: &{Name:kindnet-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:26:43.854293   17818 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:26:43.860014   17818 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:26:43.909985   17818 start.go:159] libmachine.API.Create for "kindnet-315000" (driver="qemu2")
	I0304 04:26:43.910041   17818 client.go:168] LocalClient.Create starting
	I0304 04:26:43.910166   17818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:26:43.910228   17818 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:43.910245   17818 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:43.910315   17818 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:26:43.910356   17818 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:43.910369   17818 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:43.911043   17818 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:26:44.067047   17818 main.go:141] libmachine: Creating SSH key...
	I0304 04:26:44.190975   17818 main.go:141] libmachine: Creating Disk image...
	I0304 04:26:44.190984   17818 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:26:44.191177   17818 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2
	I0304 04:26:44.204139   17818 main.go:141] libmachine: STDOUT: 
	I0304 04:26:44.204161   17818 main.go:141] libmachine: STDERR: 
	I0304 04:26:44.204224   17818 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2 +20000M
	I0304 04:26:44.215434   17818 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:26:44.215456   17818 main.go:141] libmachine: STDERR: 
	I0304 04:26:44.215467   17818 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2
	I0304 04:26:44.215472   17818 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:26:44.215508   17818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:14:83:01:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kindnet-315000/disk.qcow2
	I0304 04:26:44.217357   17818 main.go:141] libmachine: STDOUT: 
	I0304 04:26:44.217374   17818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:26:44.217387   17818 client.go:171] LocalClient.Create took 307.339666ms
	I0304 04:26:46.218857   17818 start.go:128] duration metric: createHost completed in 2.364558917s
	I0304 04:26:46.218898   17818 start.go:83] releasing machines lock for "kindnet-315000", held for 2.365101708s
	W0304 04:26:46.219070   17818 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:46.229620   17818 out.go:177] 
	W0304 04:26:46.236647   17818 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:26:46.236661   17818 out.go:239] * 
	* 
	W0304 04:26:46.237710   17818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:26:46.252579   17818 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.02s)
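Every failure in this report bottoms out in the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A point worth noting when triaging: a Unix-socket connect fails with ENOENT when the socket file is missing, but with ECONNREFUSED when the file exists and no daemon is accepting on it — so the logs above indicate the socket_vmnet socket file is present but the daemon is down or stale. A minimal sketch (using a hypothetical temp path as a stand-in for `/var/run/socket_vmnet`) demonstrating the two cases:

```python
import errno
import os
import socket
import tempfile

def probe_unix_socket(path):
    """Try to connect to a Unix-domain socket; return 'connected' or the errno name."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "connected"
    except OSError as e:
        return errno.errorcode[e.errno]
    finally:
        s.close()

# Hypothetical stand-in for /var/run/socket_vmnet:
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

print(probe_unix_socket(path))   # socket file absent -> ENOENT

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)                   # bind creates the socket file on disk...
srv.close()                      # ...and closing leaves it behind with no listener

print(probe_unix_socket(path))   # file exists, nobody accepting -> ECONNREFUSED
```

This mirrors what `socket_vmnet_client` reports in the runs above: the connect reaches a socket file with no live listener, which points at the socket_vmnet launchd daemon rather than at minikube or QEMU.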

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.846950833s)

                                                
                                                
-- stdout --
	* [calico-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-315000 in cluster calico-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:26:48.643063   17939 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:26:48.643212   17939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:26:48.643216   17939 out.go:304] Setting ErrFile to fd 2...
	I0304 04:26:48.643218   17939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:26:48.643342   17939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:26:48.644443   17939 out.go:298] Setting JSON to false
	I0304 04:26:48.660988   17939 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10580,"bootTime":1709544628,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:26:48.661056   17939 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:26:48.667030   17939 out.go:177] * [calico-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:26:48.674926   17939 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:26:48.679922   17939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:26:48.674981   17939 notify.go:220] Checking for updates...
	I0304 04:26:48.686884   17939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:26:48.694906   17939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:26:48.702886   17939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:26:48.705916   17939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:26:48.709219   17939 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:26:48.709287   17939 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:26:48.709338   17939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:26:48.712884   17939 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:26:48.719895   17939 start.go:299] selected driver: qemu2
	I0304 04:26:48.719901   17939 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:26:48.719908   17939 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:26:48.722021   17939 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:26:48.724862   17939 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:26:48.727954   17939 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:26:48.727984   17939 cni.go:84] Creating CNI manager for "calico"
	I0304 04:26:48.727992   17939 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0304 04:26:48.727999   17939 start_flags.go:323] config:
	{Name:calico-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:26:48.732333   17939 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:26:48.739925   17939 out.go:177] * Starting control plane node calico-315000 in cluster calico-315000
	I0304 04:26:48.743821   17939 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:26:48.743843   17939 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:26:48.743858   17939 cache.go:56] Caching tarball of preloaded images
	I0304 04:26:48.743926   17939 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:26:48.743932   17939 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:26:48.743998   17939 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/calico-315000/config.json ...
	I0304 04:26:48.744007   17939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/calico-315000/config.json: {Name:mkd9f688b9db9e90f81d4748f4d412c26397d082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:26:48.744192   17939 start.go:365] acquiring machines lock for calico-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:26:48.744220   17939 start.go:369] acquired machines lock for "calico-315000" in 22.958µs
	I0304 04:26:48.744230   17939 start.go:93] Provisioning new machine with config: &{Name:calico-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:26:48.744278   17939 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:26:48.750912   17939 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:26:48.765072   17939 start.go:159] libmachine.API.Create for "calico-315000" (driver="qemu2")
	I0304 04:26:48.765101   17939 client.go:168] LocalClient.Create starting
	I0304 04:26:48.765168   17939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:26:48.765211   17939 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:48.765225   17939 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:48.765255   17939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:26:48.765275   17939 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:48.765283   17939 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:48.765645   17939 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:26:48.909133   17939 main.go:141] libmachine: Creating SSH key...
	I0304 04:26:49.001288   17939 main.go:141] libmachine: Creating Disk image...
	I0304 04:26:49.001297   17939 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:26:49.001452   17939 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2
	I0304 04:26:49.014389   17939 main.go:141] libmachine: STDOUT: 
	I0304 04:26:49.014414   17939 main.go:141] libmachine: STDERR: 
	I0304 04:26:49.014492   17939 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2 +20000M
	I0304 04:26:49.026260   17939 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:26:49.026280   17939 main.go:141] libmachine: STDERR: 
	I0304 04:26:49.026301   17939 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2
	I0304 04:26:49.026305   17939 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:26:49.026335   17939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:42:d0:41:d8:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2
	I0304 04:26:49.028264   17939 main.go:141] libmachine: STDOUT: 
	I0304 04:26:49.028281   17939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:26:49.028297   17939 client.go:171] LocalClient.Create took 263.191042ms
	I0304 04:26:51.028933   17939 start.go:128] duration metric: createHost completed in 2.284640375s
	I0304 04:26:51.029123   17939 start.go:83] releasing machines lock for "calico-315000", held for 2.284794875s
	W0304 04:26:51.029226   17939 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:51.039324   17939 out.go:177] * Deleting "calico-315000" in qemu2 ...
	W0304 04:26:51.067141   17939 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:51.067163   17939 start.go:709] Will try again in 5 seconds ...
	I0304 04:26:56.069392   17939 start.go:365] acquiring machines lock for calico-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:26:56.069831   17939 start.go:369] acquired machines lock for "calico-315000" in 329.291µs
	I0304 04:26:56.069897   17939 start.go:93] Provisioning new machine with config: &{Name:calico-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:26:56.070173   17939 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:26:56.079851   17939 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:26:56.129643   17939 start.go:159] libmachine.API.Create for "calico-315000" (driver="qemu2")
	I0304 04:26:56.129688   17939 client.go:168] LocalClient.Create starting
	I0304 04:26:56.129804   17939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:26:56.129871   17939 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:56.129891   17939 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:56.129967   17939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:26:56.130008   17939 main.go:141] libmachine: Decoding PEM data...
	I0304 04:26:56.130026   17939 main.go:141] libmachine: Parsing certificate...
	I0304 04:26:56.130631   17939 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:26:56.285530   17939 main.go:141] libmachine: Creating SSH key...
	I0304 04:26:56.396369   17939 main.go:141] libmachine: Creating Disk image...
	I0304 04:26:56.396379   17939 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:26:56.396570   17939 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2
	I0304 04:26:56.408989   17939 main.go:141] libmachine: STDOUT: 
	I0304 04:26:56.409015   17939 main.go:141] libmachine: STDERR: 
	I0304 04:26:56.409082   17939 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2 +20000M
	I0304 04:26:56.420082   17939 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:26:56.420098   17939 main.go:141] libmachine: STDERR: 
	I0304 04:26:56.420119   17939 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2
	I0304 04:26:56.420123   17939 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:26:56.420160   17939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:2e:be:90:09:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/calico-315000/disk.qcow2
	I0304 04:26:56.421882   17939 main.go:141] libmachine: STDOUT: 
	I0304 04:26:56.421897   17939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:26:56.421910   17939 client.go:171] LocalClient.Create took 292.215959ms
	I0304 04:26:58.424101   17939 start.go:128] duration metric: createHost completed in 2.353883584s
	I0304 04:26:58.424153   17939 start.go:83] releasing machines lock for "calico-315000", held for 2.354317042s
	W0304 04:26:58.424372   17939 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:26:58.430473   17939 out.go:177] 
	W0304 04:26:58.437505   17939 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:26:58.437521   17939 out.go:239] * 
	* 
	W0304 04:26:58.438885   17939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:26:58.450393   17939 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.85s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.79243525s)

                                                
                                                
-- stdout --
	* [custom-flannel-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-315000 in cluster custom-flannel-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:27:00.985448   18065 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:27:00.985556   18065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:00.985559   18065 out.go:304] Setting ErrFile to fd 2...
	I0304 04:27:00.985562   18065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:00.985681   18065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:27:00.986825   18065 out.go:298] Setting JSON to false
	I0304 04:27:01.003156   18065 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10592,"bootTime":1709544628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:27:01.003324   18065 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:27:01.008539   18065 out.go:177] * [custom-flannel-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:27:01.015617   18065 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:27:01.015682   18065 notify.go:220] Checking for updates...
	I0304 04:27:01.022499   18065 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:27:01.025543   18065 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:27:01.028522   18065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:27:01.031544   18065 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:27:01.034531   18065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:27:01.037796   18065 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:27:01.037858   18065 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:27:01.037912   18065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:27:01.042497   18065 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:27:01.049565   18065 start.go:299] selected driver: qemu2
	I0304 04:27:01.049573   18065 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:27:01.049579   18065 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:27:01.051939   18065 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:27:01.055548   18065 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:27:01.058664   18065 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:27:01.058696   18065 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0304 04:27:01.058704   18065 start_flags.go:318] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0304 04:27:01.058712   18065 start_flags.go:323] config:
	{Name:custom-flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:27:01.063300   18065 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:27:01.070570   18065 out.go:177] * Starting control plane node custom-flannel-315000 in cluster custom-flannel-315000
	I0304 04:27:01.073493   18065 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:27:01.073508   18065 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:27:01.073517   18065 cache.go:56] Caching tarball of preloaded images
	I0304 04:27:01.073581   18065 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:27:01.073585   18065 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:27:01.073639   18065 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/custom-flannel-315000/config.json ...
	I0304 04:27:01.073659   18065 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/custom-flannel-315000/config.json: {Name:mk5c2aaf7b236228de1704d751bbe83af50a8e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:27:01.073884   18065 start.go:365] acquiring machines lock for custom-flannel-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:01.073920   18065 start.go:369] acquired machines lock for "custom-flannel-315000" in 28.583µs
	I0304 04:27:01.073939   18065 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:01.073973   18065 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:01.082380   18065 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:01.098454   18065 start.go:159] libmachine.API.Create for "custom-flannel-315000" (driver="qemu2")
	I0304 04:27:01.098486   18065 client.go:168] LocalClient.Create starting
	I0304 04:27:01.098546   18065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:01.098578   18065 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:01.098587   18065 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:01.098630   18065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:01.098651   18065 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:01.098659   18065 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:01.099007   18065 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:01.239306   18065 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:01.284700   18065 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:01.284705   18065 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:01.284852   18065 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2
	I0304 04:27:01.297096   18065 main.go:141] libmachine: STDOUT: 
	I0304 04:27:01.297120   18065 main.go:141] libmachine: STDERR: 
	I0304 04:27:01.297176   18065 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2 +20000M
	I0304 04:27:01.307996   18065 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:01.308017   18065 main.go:141] libmachine: STDERR: 
	I0304 04:27:01.308051   18065 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2
	I0304 04:27:01.308057   18065 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:01.308089   18065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:37:d2:73:91:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2
	I0304 04:27:01.309811   18065 main.go:141] libmachine: STDOUT: 
	I0304 04:27:01.309829   18065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:01.309849   18065 client.go:171] LocalClient.Create took 211.35775ms
	I0304 04:27:03.312041   18065 start.go:128] duration metric: createHost completed in 2.238060208s
	I0304 04:27:03.312166   18065 start.go:83] releasing machines lock for "custom-flannel-315000", held for 2.238249375s
	W0304 04:27:03.312233   18065 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:03.325228   18065 out.go:177] * Deleting "custom-flannel-315000" in qemu2 ...
	W0304 04:27:03.353185   18065 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:03.353231   18065 start.go:709] Will try again in 5 seconds ...
	I0304 04:27:08.353506   18065 start.go:365] acquiring machines lock for custom-flannel-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:08.353602   18065 start.go:369] acquired machines lock for "custom-flannel-315000" in 76.292µs
	I0304 04:27:08.353623   18065 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:08.353669   18065 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:08.361846   18065 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:08.377658   18065 start.go:159] libmachine.API.Create for "custom-flannel-315000" (driver="qemu2")
	I0304 04:27:08.377691   18065 client.go:168] LocalClient.Create starting
	I0304 04:27:08.377773   18065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:08.377815   18065 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:08.377824   18065 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:08.377859   18065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:08.377881   18065 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:08.377889   18065 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:08.378228   18065 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:08.520656   18065 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:08.682078   18065 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:08.682092   18065 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:08.682315   18065 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2
	I0304 04:27:08.695103   18065 main.go:141] libmachine: STDOUT: 
	I0304 04:27:08.695126   18065 main.go:141] libmachine: STDERR: 
	I0304 04:27:08.695212   18065 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2 +20000M
	I0304 04:27:08.706423   18065 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:08.706440   18065 main.go:141] libmachine: STDERR: 
	I0304 04:27:08.706458   18065 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2
	I0304 04:27:08.706466   18065 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:08.706513   18065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1a:5b:34:33:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/custom-flannel-315000/disk.qcow2
	I0304 04:27:08.708297   18065 main.go:141] libmachine: STDOUT: 
	I0304 04:27:08.708314   18065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:08.708328   18065 client.go:171] LocalClient.Create took 330.635916ms
	I0304 04:27:10.710414   18065 start.go:128] duration metric: createHost completed in 2.356724167s
	I0304 04:27:10.710474   18065 start.go:83] releasing machines lock for "custom-flannel-315000", held for 2.356873s
	W0304 04:27:10.710661   18065 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:10.719991   18065 out.go:177] 
	W0304 04:27:10.726974   18065 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:27:10.726985   18065 out.go:239] * 
	* 
	W0304 04:27:10.728083   18065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:27:10.739977   18065 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
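Note: every failure in this group reduces to the same root cause — nothing is listening on the `/var/run/socket_vmnet` Unix socket when `socket_vmnet_client` tries to connect, so the kernel returns `ECONNREFUSED` and the qemu2 driver surfaces it as `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal sketch (using a temporary socket path invented for illustration, not the real daemon) reproduces the same error class: a socket path that exists but has no listener behind it:

```python
import os
import socket
import tempfile

# A Unix socket path that exists on disk but has no daemon listening on it
# (a stand-in for /var/run/socket_vmnet when socket_vmnet is not running).
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)  # bound, but listen() is never called

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
    result = "connected"
except ConnectionRefusedError:
    # The same condition the logs above report as "Connection refused".
    result = "Connection refused"
finally:
    client.close()
    server.close()

print(result)  # → Connection refused
```

On the CI host the likely remedy is restarting the socket_vmnet daemon so the socket accepts connections again, rather than anything in the tests themselves.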

TestNetworkPlugins/group/false/Start (9.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.742091541s)

-- stdout --
	* [false-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-315000 in cluster false-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:27:13.196075   18188 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:27:13.196214   18188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:13.196221   18188 out.go:304] Setting ErrFile to fd 2...
	I0304 04:27:13.196223   18188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:13.196374   18188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:27:13.197440   18188 out.go:298] Setting JSON to false
	I0304 04:27:13.214137   18188 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10605,"bootTime":1709544628,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:27:13.214199   18188 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:27:13.218630   18188 out.go:177] * [false-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:27:13.226670   18188 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:27:13.229707   18188 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:27:13.226736   18188 notify.go:220] Checking for updates...
	I0304 04:27:13.235670   18188 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:27:13.238626   18188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:27:13.241669   18188 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:27:13.244668   18188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:27:13.246456   18188 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:27:13.246519   18188 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:27:13.246568   18188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:27:13.250679   18188 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:27:13.257539   18188 start.go:299] selected driver: qemu2
	I0304 04:27:13.257543   18188 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:27:13.257549   18188 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:27:13.259863   18188 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:27:13.262646   18188 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:27:13.265814   18188 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:27:13.265859   18188 cni.go:84] Creating CNI manager for "false"
	I0304 04:27:13.265867   18188 start_flags.go:323] config:
	{Name:false-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:27:13.270099   18188 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:27:13.277682   18188 out.go:177] * Starting control plane node false-315000 in cluster false-315000
	I0304 04:27:13.281693   18188 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:27:13.281710   18188 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:27:13.281729   18188 cache.go:56] Caching tarball of preloaded images
	I0304 04:27:13.281800   18188 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:27:13.281804   18188 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:27:13.281860   18188 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/false-315000/config.json ...
	I0304 04:27:13.281869   18188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/false-315000/config.json: {Name:mk7bf9231801c49411f24261cde93ba16e6f1fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:27:13.282087   18188 start.go:365] acquiring machines lock for false-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:13.282116   18188 start.go:369] acquired machines lock for "false-315000" in 23.125µs
	I0304 04:27:13.282125   18188 start.go:93] Provisioning new machine with config: &{Name:false-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:false-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:13.282162   18188 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:13.289708   18188 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:13.304449   18188 start.go:159] libmachine.API.Create for "false-315000" (driver="qemu2")
	I0304 04:27:13.304481   18188 client.go:168] LocalClient.Create starting
	I0304 04:27:13.304550   18188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:13.304598   18188 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:13.304611   18188 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:13.304638   18188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:13.304659   18188 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:13.304668   18188 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:13.305030   18188 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:13.448122   18188 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:13.514421   18188 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:13.514428   18188 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:13.514609   18188 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2
	I0304 04:27:13.527285   18188 main.go:141] libmachine: STDOUT: 
	I0304 04:27:13.527304   18188 main.go:141] libmachine: STDERR: 
	I0304 04:27:13.527362   18188 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2 +20000M
	I0304 04:27:13.538615   18188 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:13.538639   18188 main.go:141] libmachine: STDERR: 
	I0304 04:27:13.538660   18188 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2
	I0304 04:27:13.538666   18188 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:13.538720   18188 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:16:c8:08:87:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2
	I0304 04:27:13.540610   18188 main.go:141] libmachine: STDOUT: 
	I0304 04:27:13.540626   18188 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:13.540641   18188 client.go:171] LocalClient.Create took 236.157125ms
	I0304 04:27:15.542889   18188 start.go:128] duration metric: createHost completed in 2.260709042s
	I0304 04:27:15.542974   18188 start.go:83] releasing machines lock for "false-315000", held for 2.260864083s
	W0304 04:27:15.543030   18188 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:15.551587   18188 out.go:177] * Deleting "false-315000" in qemu2 ...
	W0304 04:27:15.578834   18188 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:15.578859   18188 start.go:709] Will try again in 5 seconds ...
	I0304 04:27:20.580998   18188 start.go:365] acquiring machines lock for false-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:20.581259   18188 start.go:369] acquired machines lock for "false-315000" in 194.042µs
	I0304 04:27:20.581299   18188 start.go:93] Provisioning new machine with config: &{Name:false-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:false-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:20.581392   18188 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:20.591420   18188 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:20.624010   18188 start.go:159] libmachine.API.Create for "false-315000" (driver="qemu2")
	I0304 04:27:20.624058   18188 client.go:168] LocalClient.Create starting
	I0304 04:27:20.624142   18188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:20.624185   18188 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:20.624201   18188 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:20.624253   18188 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:20.624284   18188 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:20.624295   18188 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:20.624694   18188 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:20.769732   18188 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:20.825082   18188 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:20.825087   18188 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:20.825253   18188 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2
	I0304 04:27:20.837575   18188 main.go:141] libmachine: STDOUT: 
	I0304 04:27:20.837595   18188 main.go:141] libmachine: STDERR: 
	I0304 04:27:20.837653   18188 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2 +20000M
	I0304 04:27:20.848727   18188 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:20.848753   18188 main.go:141] libmachine: STDERR: 
	I0304 04:27:20.848768   18188 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2
	I0304 04:27:20.848775   18188 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:20.848807   18188 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:d1:47:02:a5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/false-315000/disk.qcow2
	I0304 04:27:20.850737   18188 main.go:141] libmachine: STDOUT: 
	I0304 04:27:20.850751   18188 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:20.850772   18188 client.go:171] LocalClient.Create took 226.710917ms
	I0304 04:27:22.852970   18188 start.go:128] duration metric: createHost completed in 2.271542833s
	I0304 04:27:22.853070   18188 start.go:83] releasing machines lock for "false-315000", held for 2.271807292s
	W0304 04:27:22.853536   18188 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:22.869174   18188 out.go:177] 
	W0304 04:27:22.879300   18188 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:27:22.879344   18188 out.go:239] * 
	* 
	W0304 04:27:22.881832   18188 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:27:22.892194   18188 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.74s)

TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.859789209s)

-- stdout --
	* [enable-default-cni-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-315000 in cluster enable-default-cni-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:27:25.220024   18302 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:27:25.220154   18302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:25.220158   18302 out.go:304] Setting ErrFile to fd 2...
	I0304 04:27:25.220160   18302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:25.220303   18302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:27:25.221408   18302 out.go:298] Setting JSON to false
	I0304 04:27:25.238018   18302 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10617,"bootTime":1709544628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:27:25.238087   18302 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:27:25.244195   18302 out.go:177] * [enable-default-cni-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:27:25.252169   18302 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:27:25.256162   18302 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:27:25.252212   18302 notify.go:220] Checking for updates...
	I0304 04:27:25.262224   18302 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:27:25.265155   18302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:27:25.273202   18302 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:27:25.276141   18302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:27:25.279480   18302 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:27:25.279553   18302 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:27:25.279594   18302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:27:25.284156   18302 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:27:25.291126   18302 start.go:299] selected driver: qemu2
	I0304 04:27:25.291132   18302 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:27:25.291137   18302 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:27:25.293451   18302 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:27:25.297144   18302 out.go:177] * Automatically selected the socket_vmnet network
	E0304 04:27:25.300221   18302 start_flags.go:465] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0304 04:27:25.300233   18302 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:27:25.300272   18302 cni.go:84] Creating CNI manager for "bridge"
	I0304 04:27:25.300278   18302 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:27:25.300287   18302 start_flags.go:323] config:
	{Name:enable-default-cni-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:27:25.304759   18302 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:27:25.312164   18302 out.go:177] * Starting control plane node enable-default-cni-315000 in cluster enable-default-cni-315000
	I0304 04:27:25.316128   18302 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:27:25.316145   18302 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:27:25.316157   18302 cache.go:56] Caching tarball of preloaded images
	I0304 04:27:25.316224   18302 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:27:25.316240   18302 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:27:25.316322   18302 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/enable-default-cni-315000/config.json ...
	I0304 04:27:25.316341   18302 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/enable-default-cni-315000/config.json: {Name:mk98b45d9e395ae124e9ec61ede77dc419c7db21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:27:25.316573   18302 start.go:365] acquiring machines lock for enable-default-cni-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:25.316613   18302 start.go:369] acquired machines lock for "enable-default-cni-315000" in 29.958µs
	I0304 04:27:25.316626   18302 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:25.316660   18302 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:25.325129   18302 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:25.343824   18302 start.go:159] libmachine.API.Create for "enable-default-cni-315000" (driver="qemu2")
	I0304 04:27:25.343860   18302 client.go:168] LocalClient.Create starting
	I0304 04:27:25.343938   18302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:25.343973   18302 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:25.343981   18302 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:25.344031   18302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:25.344054   18302 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:25.344063   18302 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:25.344427   18302 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:25.487065   18302 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:25.552419   18302 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:25.552425   18302 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:25.552618   18302 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2
	I0304 04:27:25.565033   18302 main.go:141] libmachine: STDOUT: 
	I0304 04:27:25.565053   18302 main.go:141] libmachine: STDERR: 
	I0304 04:27:25.565112   18302 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2 +20000M
	I0304 04:27:25.576386   18302 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:25.576405   18302 main.go:141] libmachine: STDERR: 
	I0304 04:27:25.576417   18302 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2
	I0304 04:27:25.576423   18302 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:25.576451   18302 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d6:05:d5:de:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2
	I0304 04:27:25.578463   18302 main.go:141] libmachine: STDOUT: 
	I0304 04:27:25.578479   18302 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:25.578497   18302 client.go:171] LocalClient.Create took 234.633584ms
	I0304 04:27:27.580762   18302 start.go:128] duration metric: createHost completed in 2.26408025s
	I0304 04:27:27.580895   18302 start.go:83] releasing machines lock for "enable-default-cni-315000", held for 2.264285s
	W0304 04:27:27.580995   18302 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:27.598251   18302 out.go:177] * Deleting "enable-default-cni-315000" in qemu2 ...
	W0304 04:27:27.625265   18302 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:27.625312   18302 start.go:709] Will try again in 5 seconds ...
	I0304 04:27:32.627484   18302 start.go:365] acquiring machines lock for enable-default-cni-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:32.627648   18302 start.go:369] acquired machines lock for "enable-default-cni-315000" in 127.625µs
	I0304 04:27:32.627694   18302 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:32.627792   18302 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:32.635870   18302 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:32.656190   18302 start.go:159] libmachine.API.Create for "enable-default-cni-315000" (driver="qemu2")
	I0304 04:27:32.656226   18302 client.go:168] LocalClient.Create starting
	I0304 04:27:32.656292   18302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:32.656330   18302 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:32.656343   18302 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:32.656380   18302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:32.656404   18302 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:32.656418   18302 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:32.656750   18302 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:32.797253   18302 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:32.978145   18302 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:32.978156   18302 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:32.978387   18302 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2
	I0304 04:27:32.992796   18302 main.go:141] libmachine: STDOUT: 
	I0304 04:27:32.992870   18302 main.go:141] libmachine: STDERR: 
	I0304 04:27:32.992932   18302 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2 +20000M
	I0304 04:27:33.003847   18302 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:33.003888   18302 main.go:141] libmachine: STDERR: 
	I0304 04:27:33.003902   18302 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2
	I0304 04:27:33.003907   18302 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:33.003940   18302 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b6:df:87:ce:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/enable-default-cni-315000/disk.qcow2
	I0304 04:27:33.005828   18302 main.go:141] libmachine: STDOUT: 
	I0304 04:27:33.005846   18302 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:33.005856   18302 client.go:171] LocalClient.Create took 349.626792ms
	I0304 04:27:35.008182   18302 start.go:128] duration metric: createHost completed in 2.380316958s
	I0304 04:27:35.008272   18302 start.go:83] releasing machines lock for "enable-default-cni-315000", held for 2.380626333s
	W0304 04:27:35.008889   18302 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:35.019640   18302 out.go:177] 
	W0304 04:27:35.024699   18302 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:27:35.024748   18302 out.go:239] * 
	* 
	W0304 04:27:35.027535   18302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:27:35.038553   18302 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
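Every StartHost failure in this group reduces to the same condition: nothing is listening on /var/run/socket_vmnet when /opt/socket_vmnet/bin/socket_vmnet_client tries to connect. A minimal sketch of that reachability check, independent of minikube (the helper name is illustrative, not part of any tool in this report):

```python
import socket

def unix_socket_reachable(path: str, timeout: float = 1.0) -> bool:
    """Return True if something is accepting connections on the
    Unix-domain socket at `path`; False on ECONNREFUSED or ENOENT,
    i.e. the condition socket_vmnet_client reports in the logs above."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        # Covers ConnectionRefusedError and FileNotFoundError.
        return False
    finally:
        s.close()

# On the failing CI host this would return False for
# "/var/run/socket_vmnet" until the daemon is restarted.
```

If this check fails on the build host, restarting the socket_vmnet daemon there is the usual remedy; the test binaries cannot recover on their own once the socket is gone, which is why every qemu2 test in this run fails identically.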

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.758706792s)

                                                
                                                
-- stdout --
	* [flannel-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-315000 in cluster flannel-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:27:37.361688   18416 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:27:37.361798   18416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:37.361802   18416 out.go:304] Setting ErrFile to fd 2...
	I0304 04:27:37.361804   18416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:37.361939   18416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:27:37.363005   18416 out.go:298] Setting JSON to false
	I0304 04:27:37.379752   18416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10629,"bootTime":1709544628,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:27:37.379825   18416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:27:37.385886   18416 out.go:177] * [flannel-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:27:37.392903   18416 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:27:37.396964   18416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:27:37.392972   18416 notify.go:220] Checking for updates...
	I0304 04:27:37.399829   18416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:27:37.402903   18416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:27:37.405971   18416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:27:37.408912   18416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:27:37.412336   18416 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:27:37.412399   18416 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:27:37.412445   18416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:27:37.416920   18416 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:27:37.423873   18416 start.go:299] selected driver: qemu2
	I0304 04:27:37.423880   18416 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:27:37.423885   18416 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:27:37.426355   18416 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:27:37.428858   18416 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:27:37.431908   18416 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:27:37.431939   18416 cni.go:84] Creating CNI manager for "flannel"
	I0304 04:27:37.431943   18416 start_flags.go:318] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0304 04:27:37.431948   18416 start_flags.go:323] config:
	{Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:27:37.436196   18416 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:27:37.442937   18416 out.go:177] * Starting control plane node flannel-315000 in cluster flannel-315000
	I0304 04:27:37.446893   18416 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:27:37.446909   18416 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:27:37.446920   18416 cache.go:56] Caching tarball of preloaded images
	I0304 04:27:37.446974   18416 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:27:37.446980   18416 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:27:37.447051   18416 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/flannel-315000/config.json ...
	I0304 04:27:37.447064   18416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/flannel-315000/config.json: {Name:mk5967c5e52ec476259a5dd46bf717465fa507eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:27:37.447268   18416 start.go:365] acquiring machines lock for flannel-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:37.447300   18416 start.go:369] acquired machines lock for "flannel-315000" in 26.625µs
	I0304 04:27:37.447309   18416 start.go:93] Provisioning new machine with config: &{Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:37.447337   18416 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:37.454897   18416 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:37.469456   18416 start.go:159] libmachine.API.Create for "flannel-315000" (driver="qemu2")
	I0304 04:27:37.469493   18416 client.go:168] LocalClient.Create starting
	I0304 04:27:37.469559   18416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:37.469594   18416 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:37.469602   18416 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:37.469644   18416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:37.469665   18416 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:37.469673   18416 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:37.470005   18416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:37.610880   18416 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:37.688600   18416 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:37.688606   18416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:37.688792   18416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2
	I0304 04:27:37.701081   18416 main.go:141] libmachine: STDOUT: 
	I0304 04:27:37.701116   18416 main.go:141] libmachine: STDERR: 
	I0304 04:27:37.701173   18416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2 +20000M
	I0304 04:27:37.712460   18416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:37.712479   18416 main.go:141] libmachine: STDERR: 
	I0304 04:27:37.712491   18416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2
	I0304 04:27:37.712496   18416 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:37.712524   18416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:f2:1d:ff:2d:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2
	I0304 04:27:37.714401   18416 main.go:141] libmachine: STDOUT: 
	I0304 04:27:37.714426   18416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:37.714447   18416 client.go:171] LocalClient.Create took 244.95075ms
	I0304 04:27:39.716153   18416 start.go:128] duration metric: createHost completed in 2.268808791s
	I0304 04:27:39.716214   18416 start.go:83] releasing machines lock for "flannel-315000", held for 2.268920916s
	W0304 04:27:39.716284   18416 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:39.728922   18416 out.go:177] * Deleting "flannel-315000" in qemu2 ...
	W0304 04:27:39.753126   18416 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:39.753152   18416 start.go:709] Will try again in 5 seconds ...
	I0304 04:27:44.755228   18416 start.go:365] acquiring machines lock for flannel-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:44.755360   18416 start.go:369] acquired machines lock for "flannel-315000" in 100.541µs
	I0304 04:27:44.755394   18416 start.go:93] Provisioning new machine with config: &{Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:44.755470   18416 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:44.759500   18416 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:44.775483   18416 start.go:159] libmachine.API.Create for "flannel-315000" (driver="qemu2")
	I0304 04:27:44.775511   18416 client.go:168] LocalClient.Create starting
	I0304 04:27:44.775585   18416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:44.775620   18416 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:44.775629   18416 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:44.775663   18416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:44.775685   18416 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:44.775692   18416 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:44.775995   18416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:44.916793   18416 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:45.016499   18416 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:45.016506   18416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:45.016704   18416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2
	I0304 04:27:45.029315   18416 main.go:141] libmachine: STDOUT: 
	I0304 04:27:45.029341   18416 main.go:141] libmachine: STDERR: 
	I0304 04:27:45.029387   18416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2 +20000M
	I0304 04:27:45.040946   18416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:45.040968   18416 main.go:141] libmachine: STDERR: 
	I0304 04:27:45.040985   18416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2
	I0304 04:27:45.040991   18416 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:45.041020   18416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:00:98:a2:5d:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/flannel-315000/disk.qcow2
	I0304 04:27:45.042951   18416 main.go:141] libmachine: STDOUT: 
	I0304 04:27:45.042970   18416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:45.042989   18416 client.go:171] LocalClient.Create took 267.469167ms
	I0304 04:27:47.045189   18416 start.go:128] duration metric: createHost completed in 2.289705084s
	I0304 04:27:47.045270   18416 start.go:83] releasing machines lock for "flannel-315000", held for 2.28991075s
	W0304 04:27:47.045735   18416 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:47.060450   18416 out.go:177] 
	W0304 04:27:47.064549   18416 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:27:47.064578   18416 out.go:239] * 
	* 
	W0304 04:27:47.067153   18416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:27:47.076797   18416 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)

TestNetworkPlugins/group/bridge/Start (9.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.980207334s)

-- stdout --
	* [bridge-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-315000 in cluster bridge-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:27:49.672295   18548 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:27:49.672430   18548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:49.672433   18548 out.go:304] Setting ErrFile to fd 2...
	I0304 04:27:49.672436   18548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:27:49.672570   18548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:27:49.673764   18548 out.go:298] Setting JSON to false
	I0304 04:27:49.690983   18548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10641,"bootTime":1709544628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:27:49.691046   18548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:27:49.696546   18548 out.go:177] * [bridge-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:27:49.704465   18548 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:27:49.708494   18548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:27:49.704527   18548 notify.go:220] Checking for updates...
	I0304 04:27:49.714464   18548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:27:49.717531   18548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:27:49.718992   18548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:27:49.722462   18548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:27:49.725789   18548 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:27:49.725855   18548 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:27:49.725907   18548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:27:49.730327   18548 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:27:49.737471   18548 start.go:299] selected driver: qemu2
	I0304 04:27:49.737477   18548 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:27:49.737482   18548 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:27:49.739613   18548 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:27:49.742526   18548 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:27:49.745617   18548 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:27:49.745672   18548 cni.go:84] Creating CNI manager for "bridge"
	I0304 04:27:49.745685   18548 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:27:49.745690   18548 start_flags.go:323] config:
	{Name:bridge-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:27:49.750017   18548 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:27:49.756443   18548 out.go:177] * Starting control plane node bridge-315000 in cluster bridge-315000
	I0304 04:27:49.760457   18548 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:27:49.760469   18548 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:27:49.760474   18548 cache.go:56] Caching tarball of preloaded images
	I0304 04:27:49.760521   18548 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:27:49.760526   18548 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:27:49.760580   18548 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/bridge-315000/config.json ...
	I0304 04:27:49.760591   18548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/bridge-315000/config.json: {Name:mk1b14b1df202dd3a9c50999c5e85013ff5db44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:27:49.760791   18548 start.go:365] acquiring machines lock for bridge-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:49.760820   18548 start.go:369] acquired machines lock for "bridge-315000" in 24.375µs
	I0304 04:27:49.760830   18548 start.go:93] Provisioning new machine with config: &{Name:bridge-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:bridge-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:49.760857   18548 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:49.768418   18548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:49.784482   18548 start.go:159] libmachine.API.Create for "bridge-315000" (driver="qemu2")
	I0304 04:27:49.784509   18548 client.go:168] LocalClient.Create starting
	I0304 04:27:49.784575   18548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:49.784603   18548 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:49.784613   18548 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:49.784650   18548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:49.784673   18548 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:49.784681   18548 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:49.785040   18548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:49.925151   18548 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:50.123253   18548 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:50.123263   18548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:50.123449   18548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2
	I0304 04:27:50.135812   18548 main.go:141] libmachine: STDOUT: 
	I0304 04:27:50.135839   18548 main.go:141] libmachine: STDERR: 
	I0304 04:27:50.135886   18548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2 +20000M
	I0304 04:27:50.147101   18548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:50.147117   18548 main.go:141] libmachine: STDERR: 
	I0304 04:27:50.147143   18548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2
	I0304 04:27:50.147148   18548 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:50.147179   18548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a3:3c:0a:9a:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2
	I0304 04:27:50.149046   18548 main.go:141] libmachine: STDOUT: 
	I0304 04:27:50.149060   18548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:50.149093   18548 client.go:171] LocalClient.Create took 364.58ms
	I0304 04:27:52.151328   18548 start.go:128] duration metric: createHost completed in 2.390453167s
	I0304 04:27:52.151422   18548 start.go:83] releasing machines lock for "bridge-315000", held for 2.390609583s
	W0304 04:27:52.151476   18548 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:52.165043   18548 out.go:177] * Deleting "bridge-315000" in qemu2 ...
	W0304 04:27:52.187784   18548 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:52.187817   18548 start.go:709] Will try again in 5 seconds ...
	I0304 04:27:57.189884   18548 start.go:365] acquiring machines lock for bridge-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:27:57.190062   18548 start.go:369] acquired machines lock for "bridge-315000" in 146.542µs
	I0304 04:27:57.190116   18548 start.go:93] Provisioning new machine with config: &{Name:bridge-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:bridge-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:27:57.190200   18548 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:27:57.200524   18548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:27:57.225703   18548 start.go:159] libmachine.API.Create for "bridge-315000" (driver="qemu2")
	I0304 04:27:57.225747   18548 client.go:168] LocalClient.Create starting
	I0304 04:27:57.225828   18548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:27:57.225882   18548 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:57.225900   18548 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:57.225946   18548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:27:57.225975   18548 main.go:141] libmachine: Decoding PEM data...
	I0304 04:27:57.225986   18548 main.go:141] libmachine: Parsing certificate...
	I0304 04:27:57.226354   18548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:27:57.369941   18548 main.go:141] libmachine: Creating SSH key...
	I0304 04:27:57.553311   18548 main.go:141] libmachine: Creating Disk image...
	I0304 04:27:57.553324   18548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:27:57.553545   18548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2
	I0304 04:27:57.566492   18548 main.go:141] libmachine: STDOUT: 
	I0304 04:27:57.566515   18548 main.go:141] libmachine: STDERR: 
	I0304 04:27:57.566571   18548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2 +20000M
	I0304 04:27:57.577469   18548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:27:57.577489   18548 main.go:141] libmachine: STDERR: 
	I0304 04:27:57.577519   18548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2
	I0304 04:27:57.577525   18548 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:27:57.577557   18548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:4c:3f:53:23:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/bridge-315000/disk.qcow2
	I0304 04:27:57.579324   18548 main.go:141] libmachine: STDOUT: 
	I0304 04:27:57.579385   18548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:27:57.579396   18548 client.go:171] LocalClient.Create took 353.645417ms
	I0304 04:27:59.581595   18548 start.go:128] duration metric: createHost completed in 2.391381416s
	I0304 04:27:59.581673   18548 start.go:83] releasing machines lock for "bridge-315000", held for 2.391613542s
	W0304 04:27:59.582068   18548 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:27:59.592761   18548 out.go:177] 
	W0304 04:27:59.595818   18548 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:27:59.595856   18548 out.go:239] * 
	* 
	W0304 04:27:59.597672   18548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:27:59.606793   18548 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.98s)

TestNetworkPlugins/group/kubenet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.763815083s)

-- stdout --
	* [kubenet-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-315000 in cluster kubenet-315000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:01.912164   18658 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:01.912305   18658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:01.912308   18658 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:01.912310   18658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:01.912458   18658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:01.913518   18658 out.go:298] Setting JSON to false
	I0304 04:28:01.930095   18658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10653,"bootTime":1709544628,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:01.930165   18658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:01.935081   18658 out.go:177] * [kubenet-315000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:01.943215   18658 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:01.946151   18658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:01.943266   18658 notify.go:220] Checking for updates...
	I0304 04:28:01.952189   18658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:01.955159   18658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:01.958199   18658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:01.961182   18658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:01.964623   18658 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:01.964690   18658 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:28:01.964749   18658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:01.969126   18658 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:28:01.975147   18658 start.go:299] selected driver: qemu2
	I0304 04:28:01.975153   18658 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:28:01.975159   18658 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:01.977642   18658 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:28:01.980206   18658 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:28:01.983290   18658 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:01.983323   18658 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0304 04:28:01.983327   18658 start_flags.go:323] config:
	{Name:kubenet-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:01.988036   18658 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:01.995168   18658 out.go:177] * Starting control plane node kubenet-315000 in cluster kubenet-315000
	I0304 04:28:01.999210   18658 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:28:01.999228   18658 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:28:01.999239   18658 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:01.999304   18658 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:01.999312   18658 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:28:01.999391   18658 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kubenet-315000/config.json ...
	I0304 04:28:01.999407   18658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/kubenet-315000/config.json: {Name:mkc9d7b353963481dda19e37e87b52851eda6471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:28:01.999630   18658 start.go:365] acquiring machines lock for kubenet-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:01.999662   18658 start.go:369] acquired machines lock for "kubenet-315000" in 26.459µs
	I0304 04:28:01.999672   18658 start.go:93] Provisioning new machine with config: &{Name:kubenet-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:01.999702   18658 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:02.007170   18658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:28:02.023516   18658 start.go:159] libmachine.API.Create for "kubenet-315000" (driver="qemu2")
	I0304 04:28:02.023545   18658 client.go:168] LocalClient.Create starting
	I0304 04:28:02.023614   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:02.023642   18658 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:02.023655   18658 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:02.023699   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:02.023723   18658 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:02.023729   18658 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:02.024102   18658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:02.170564   18658 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:02.235385   18658 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:02.235391   18658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:02.235561   18658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2
	I0304 04:28:02.248385   18658 main.go:141] libmachine: STDOUT: 
	I0304 04:28:02.248418   18658 main.go:141] libmachine: STDERR: 
	I0304 04:28:02.248478   18658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2 +20000M
	I0304 04:28:02.259801   18658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:02.259818   18658 main.go:141] libmachine: STDERR: 
	I0304 04:28:02.259839   18658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2
	I0304 04:28:02.259845   18658 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:02.259879   18658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:0d:28:4a:68:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2
	I0304 04:28:02.261866   18658 main.go:141] libmachine: STDOUT: 
	I0304 04:28:02.261889   18658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:02.261904   18658 client.go:171] LocalClient.Create took 238.355083ms
	I0304 04:28:04.262416   18658 start.go:128] duration metric: createHost completed in 2.262709625s
	I0304 04:28:04.262450   18658 start.go:83] releasing machines lock for "kubenet-315000", held for 2.262796041s
	W0304 04:28:04.262475   18658 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:04.275290   18658 out.go:177] * Deleting "kubenet-315000" in qemu2 ...
	W0304 04:28:04.287877   18658 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:04.287896   18658 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:09.289966   18658 start.go:365] acquiring machines lock for kubenet-315000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:09.290139   18658 start.go:369] acquired machines lock for "kubenet-315000" in 133.834µs
	I0304 04:28:09.290181   18658 start.go:93] Provisioning new machine with config: &{Name:kubenet-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-315000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:09.290267   18658 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:09.298498   18658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0304 04:28:09.319310   18658 start.go:159] libmachine.API.Create for "kubenet-315000" (driver="qemu2")
	I0304 04:28:09.319349   18658 client.go:168] LocalClient.Create starting
	I0304 04:28:09.319423   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:09.319458   18658 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:09.319468   18658 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:09.319516   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:09.319540   18658 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:09.319546   18658 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:09.319926   18658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:09.463552   18658 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:09.579048   18658 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:09.579057   18658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:09.579248   18658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2
	I0304 04:28:09.591666   18658 main.go:141] libmachine: STDOUT: 
	I0304 04:28:09.591730   18658 main.go:141] libmachine: STDERR: 
	I0304 04:28:09.591779   18658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2 +20000M
	I0304 04:28:09.602752   18658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:09.602774   18658 main.go:141] libmachine: STDERR: 
	I0304 04:28:09.602795   18658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2
	I0304 04:28:09.602801   18658 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:09.602836   18658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a5:92:bd:c5:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/kubenet-315000/disk.qcow2
	I0304 04:28:09.604600   18658 main.go:141] libmachine: STDOUT: 
	I0304 04:28:09.604645   18658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:09.604661   18658 client.go:171] LocalClient.Create took 285.306416ms
	I0304 04:28:11.606821   18658 start.go:128] duration metric: createHost completed in 2.316543667s
	I0304 04:28:11.606893   18658 start.go:83] releasing machines lock for "kubenet-315000", held for 2.316758208s
	W0304 04:28:11.607164   18658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:11.616723   18658 out.go:177] 
	W0304 04:28:11.621851   18658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:11.621873   18658 out.go:239] * 
	* 
	W0304 04:28:11.623209   18658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:11.634698   18658 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-394000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-394000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.763567708s)

                                                
                                                
-- stdout --
	* [old-k8s-version-394000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-394000 in cluster old-k8s-version-394000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:28:13.878978   18776 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:13.879105   18776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:13.879109   18776 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:13.879111   18776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:13.879238   18776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:13.880339   18776 out.go:298] Setting JSON to false
	I0304 04:28:13.897039   18776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10665,"bootTime":1709544628,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:13.897122   18776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:13.903270   18776 out.go:177] * [old-k8s-version-394000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:13.908901   18776 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:13.913242   18776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:13.908934   18776 notify.go:220] Checking for updates...
	I0304 04:28:13.917661   18776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:13.920266   18776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:13.923263   18776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:13.926239   18776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:13.929559   18776 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:13.929632   18776 config.go:182] Loaded profile config "stopped-upgrade-289000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0304 04:28:13.929688   18776 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:13.934242   18776 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:28:13.941196   18776 start.go:299] selected driver: qemu2
	I0304 04:28:13.941201   18776 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:28:13.941206   18776 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:13.943354   18776 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:28:13.946245   18776 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:28:13.949393   18776 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:13.949458   18776 cni.go:84] Creating CNI manager for ""
	I0304 04:28:13.949464   18776 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0304 04:28:13.949468   18776 start_flags.go:323] config:
	{Name:old-k8s-version-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-394000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:13.953625   18776 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:13.961226   18776 out.go:177] * Starting control plane node old-k8s-version-394000 in cluster old-k8s-version-394000
	I0304 04:28:13.964172   18776 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:28:13.964188   18776 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:28:13.964202   18776 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:13.964265   18776 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:13.964272   18776 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0304 04:28:13.964340   18776 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/old-k8s-version-394000/config.json ...
	I0304 04:28:13.964350   18776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/old-k8s-version-394000/config.json: {Name:mk37d8bd5a6d5f53f9b5a908dca6b37a28aa1168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:28:13.964560   18776 start.go:365] acquiring machines lock for old-k8s-version-394000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:13.964587   18776 start.go:369] acquired machines lock for "old-k8s-version-394000" in 22.291µs
	I0304 04:28:13.964597   18776 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-394000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:13.964636   18776 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:13.973114   18776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:13.987736   18776 start.go:159] libmachine.API.Create for "old-k8s-version-394000" (driver="qemu2")
	I0304 04:28:13.987758   18776 client.go:168] LocalClient.Create starting
	I0304 04:28:13.987814   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:13.987843   18776 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:13.987855   18776 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:13.987893   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:13.987914   18776 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:13.987920   18776 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:13.988249   18776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:14.130482   18776 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:14.252756   18776 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:14.252766   18776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:14.252955   18776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:14.265868   18776 main.go:141] libmachine: STDOUT: 
	I0304 04:28:14.265888   18776 main.go:141] libmachine: STDERR: 
	I0304 04:28:14.265941   18776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2 +20000M
	I0304 04:28:14.276806   18776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:14.276821   18776 main.go:141] libmachine: STDERR: 
	I0304 04:28:14.276841   18776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:14.276845   18776 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:14.276878   18776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f2:7c:60:5b:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:14.278599   18776 main.go:141] libmachine: STDOUT: 
	I0304 04:28:14.278615   18776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:14.278637   18776 client.go:171] LocalClient.Create took 290.876125ms
	I0304 04:28:16.279791   18776 start.go:128] duration metric: createHost completed in 2.315143791s
	I0304 04:28:16.279878   18776 start.go:83] releasing machines lock for "old-k8s-version-394000", held for 2.315293583s
	W0304 04:28:16.279932   18776 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:16.290749   18776 out.go:177] * Deleting "old-k8s-version-394000" in qemu2 ...
	W0304 04:28:16.319535   18776 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:16.319569   18776 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:21.321690   18776 start.go:365] acquiring machines lock for old-k8s-version-394000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:23.257271   18776 start.go:369] acquired machines lock for "old-k8s-version-394000" in 1.935523792s
	I0304 04:28:23.257406   18776 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-394000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:23.257798   18776 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:23.267618   18776 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:23.318404   18776 start.go:159] libmachine.API.Create for "old-k8s-version-394000" (driver="qemu2")
	I0304 04:28:23.318458   18776 client.go:168] LocalClient.Create starting
	I0304 04:28:23.318598   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:23.318668   18776 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:23.318691   18776 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:23.318785   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:23.318827   18776 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:23.318843   18776 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:23.319393   18776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:23.473069   18776 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:23.532646   18776 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:23.532652   18776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:23.532830   18776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:23.545371   18776 main.go:141] libmachine: STDOUT: 
	I0304 04:28:23.545393   18776 main.go:141] libmachine: STDERR: 
	I0304 04:28:23.545453   18776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2 +20000M
	I0304 04:28:23.556297   18776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:23.556316   18776 main.go:141] libmachine: STDERR: 
	I0304 04:28:23.556327   18776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:23.556335   18776 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:23.556380   18776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:df:ba:df:48:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:23.558202   18776 main.go:141] libmachine: STDOUT: 
	I0304 04:28:23.558218   18776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:23.558233   18776 client.go:171] LocalClient.Create took 239.768625ms
	I0304 04:28:25.558699   18776 start.go:128] duration metric: createHost completed in 2.300848709s
	I0304 04:28:25.558796   18776 start.go:83] releasing machines lock for "old-k8s-version-394000", held for 2.3014725s
	W0304 04:28:25.559102   18776 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:25.578385   18776 out.go:177] 
	W0304 04:28:25.585043   18776 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:25.585075   18776 out.go:239] * 
	* 
	W0304 04:28:25.587663   18776 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:25.597661   18776 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-394000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (66.671ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.83s)
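Every failure in this block reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket file's daemon is not accepting connections. A minimal sketch of that failure mode (using a temporary path, not the real `/var/run/socket_vmnet`): a Unix socket that is bound but never `listen()`ed refuses `connect()` with the same errno the socket_vmnet client reports above.

```python
import os
import socket
import tempfile

# A bound-but-not-listening Unix socket yields ECONNREFUSED on connect(),
# the same error the socket_vmnet client reports in the log above.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)  # creates the socket file, but no listen()/accept()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
except ConnectionRefusedError:
    print(f'Failed to connect to "{path}": Connection refused')
finally:
    client.close()
    server.close()
```

On the CI host the socket file exists but the `socket_vmnet` daemon is not running (or not reachable), so every `qemu-system-aarch64` launch through `socket_vmnet_client` fails the same way.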

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-155000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-155000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.855183459s)

-- stdout --
	* [no-preload-155000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-155000 in cluster no-preload-155000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
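The stdout above shows minikube's create flow for this test: create the VM, hit the refused socket, delete the profile, retry once (after the 5-second wait logged earlier), then exit with `GUEST_PROVISION`. A rough sketch of that bounded-retry shape, with hypothetical function names standing in for the driver's `createHost`:

```python
import time

def create_host():
    # Stand-in for the qemu2 driver's createHost; in this run it always
    # fails because nothing is listening on /var/run/socket_vmnet.
    raise ConnectionRefusedError(
        'Failed to connect to "/var/run/socket_vmnet": Connection refused')

def start_with_retry(attempts=2, delay_s=0):
    last_err = None
    for i in range(attempts):
        try:
            return create_host()
        except ConnectionRefusedError as err:
            last_err = err           # "StartHost failed, but will try again"
            if i + 1 < attempts:
                time.sleep(delay_s)  # the log waits 5 seconds here
    raise RuntimeError(f"Exiting due to GUEST_PROVISION: {last_err}")

try:
    start_with_retry()
except RuntimeError as err:
    print(err)
```

This is only an illustration of the retry pattern visible in the log, not minikube's actual implementation; the real code also releases and reacquires the machines lock between attempts.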
** stderr ** 
	I0304 04:28:20.875399   18794 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:20.875528   18794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:20.875532   18794 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:20.875534   18794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:20.875664   18794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:20.876648   18794 out.go:298] Setting JSON to false
	I0304 04:28:20.893085   18794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10672,"bootTime":1709544628,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:20.893175   18794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:20.896734   18794 out.go:177] * [no-preload-155000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:20.904693   18794 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:20.908669   18794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:20.904786   18794 notify.go:220] Checking for updates...
	I0304 04:28:20.911616   18794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:20.914678   18794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:20.917705   18794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:20.918993   18794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:20.922107   18794 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:20.922191   18794 config.go:182] Loaded profile config "old-k8s-version-394000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0304 04:28:20.922236   18794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:20.926714   18794 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:28:20.931655   18794 start.go:299] selected driver: qemu2
	I0304 04:28:20.931663   18794 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:28:20.931670   18794 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:20.933960   18794 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:28:20.936710   18794 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:28:20.939818   18794 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:20.939877   18794 cni.go:84] Creating CNI manager for ""
	I0304 04:28:20.939884   18794 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:20.939889   18794 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:28:20.939895   18794 start_flags.go:323] config:
	{Name:no-preload-155000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:20.944532   18794 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.951689   18794 out.go:177] * Starting control plane node no-preload-155000 in cluster no-preload-155000
	I0304 04:28:20.955608   18794 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:28:20.955701   18794 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/no-preload-155000/config.json ...
	I0304 04:28:20.955733   18794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/no-preload-155000/config.json: {Name:mk36ca0779f1edbbd629c16b2e1e8eb48f265f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:28:20.955748   18794 cache.go:107] acquiring lock: {Name:mk7f58029d9b549ed1b53d9ce985d3e0b0f5f3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.955752   18794 cache.go:107] acquiring lock: {Name:mk81aa7944e03a923dcba1b84febfdc8d1dc6c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.955818   18794 cache.go:107] acquiring lock: {Name:mk32a691ff3342fd246bc3070b890968b050faa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.955855   18794 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0304 04:28:20.955861   18794 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.875µs
	I0304 04:28:20.955869   18794 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0304 04:28:20.955875   18794 cache.go:107] acquiring lock: {Name:mk418e03d18efef5d707693a9e6136f9f343acb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.955888   18794 cache.go:107] acquiring lock: {Name:mk1f34424242566283322731af54f08f1fb3f2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.955967   18794 start.go:365] acquiring machines lock for no-preload-155000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:20.955957   18794 cache.go:107] acquiring lock: {Name:mk3d9850ac712d86460bf46d9b50082fe887f4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.955921   18794 cache.go:107] acquiring lock: {Name:mkeece92f977da3fde325ea7e5181c94d8670a8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.956000   18794 start.go:369] acquired machines lock for "no-preload-155000" in 26.875µs
	I0304 04:28:20.956009   18794 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0304 04:28:20.956010   18794 start.go:93] Provisioning new machine with config: &{Name:no-preload-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:20.956063   18794 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:20.955974   18794 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0304 04:28:20.956105   18794 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0304 04:28:20.956107   18794 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0304 04:28:20.955963   18794 cache.go:107] acquiring lock: {Name:mk724d120bfe5b20a8a707b65b891692983dcf63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:20.956189   18794 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0304 04:28:20.963733   18794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:20.956463   18794 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0304 04:28:20.956524   18794 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0304 04:28:20.966531   18794 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0304 04:28:20.966626   18794 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0304 04:28:20.967186   18794 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0304 04:28:20.967368   18794 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0304 04:28:20.969149   18794 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0304 04:28:20.969165   18794 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0304 04:28:20.969229   18794 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0304 04:28:20.981602   18794 start.go:159] libmachine.API.Create for "no-preload-155000" (driver="qemu2")
	I0304 04:28:20.981625   18794 client.go:168] LocalClient.Create starting
	I0304 04:28:20.981714   18794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:20.981757   18794 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:20.981771   18794 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:20.981813   18794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:20.981837   18794 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:20.981846   18794 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:20.982244   18794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:21.130248   18794 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:21.229232   18794 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:21.229282   18794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:21.229498   18794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:21.242592   18794 main.go:141] libmachine: STDOUT: 
	I0304 04:28:21.242612   18794 main.go:141] libmachine: STDERR: 
	I0304 04:28:21.242677   18794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2 +20000M
	I0304 04:28:21.254336   18794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:21.254354   18794 main.go:141] libmachine: STDERR: 
	I0304 04:28:21.254366   18794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:21.254370   18794 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:21.254400   18794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:08:87:64:2a:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:21.256335   18794 main.go:141] libmachine: STDOUT: 
	I0304 04:28:21.256353   18794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:21.256371   18794 client.go:171] LocalClient.Create took 274.741458ms
	I0304 04:28:22.873777   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0304 04:28:22.987217   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0304 04:28:23.002777   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0304 04:28:23.007078   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0304 04:28:23.018370   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0304 04:28:23.025635   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0304 04:28:23.030197   18794 cache.go:162] opening:  /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0304 04:28:23.152991   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0304 04:28:23.153042   18794 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.197165958s
	I0304 04:28:23.153067   18794 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0304 04:28:23.257152   18794 start.go:128] duration metric: createHost completed in 2.301087209s
	I0304 04:28:23.257188   18794 start.go:83] releasing machines lock for "no-preload-155000", held for 2.301195667s
	W0304 04:28:23.257237   18794 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:23.275508   18794 out.go:177] * Deleting "no-preload-155000" in qemu2 ...
	W0304 04:28:23.298334   18794 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:23.298367   18794 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:25.204809   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0304 04:28:25.204860   18794 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.249004375s
	I0304 04:28:25.204887   18794 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0304 04:28:26.659398   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0304 04:28:26.659462   18794 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.703745125s
	I0304 04:28:26.659490   18794 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0304 04:28:27.385123   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0304 04:28:27.385217   18794 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 6.429359875s
	I0304 04:28:27.385244   18794 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0304 04:28:27.794830   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0304 04:28:27.794897   18794 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 6.839178959s
	I0304 04:28:27.794925   18794 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0304 04:28:27.990837   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0304 04:28:27.990876   18794 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 7.035046833s
	I0304 04:28:27.990898   18794 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0304 04:28:28.298547   18794 start.go:365] acquiring machines lock for no-preload-155000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:28.298929   18794 start.go:369] acquired machines lock for "no-preload-155000" in 291.458µs
	I0304 04:28:28.299002   18794 start.go:93] Provisioning new machine with config: &{Name:no-preload-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:28.299279   18794 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:28.309904   18794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:28.356990   18794 start.go:159] libmachine.API.Create for "no-preload-155000" (driver="qemu2")
	I0304 04:28:28.357034   18794 client.go:168] LocalClient.Create starting
	I0304 04:28:28.357128   18794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:28.357181   18794 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:28.357214   18794 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:28.357280   18794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:28.357309   18794 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:28.357324   18794 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:28.357857   18794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:28.510981   18794 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:28.621410   18794 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:28.621417   18794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:28.621582   18794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:28.634181   18794 main.go:141] libmachine: STDOUT: 
	I0304 04:28:28.634207   18794 main.go:141] libmachine: STDERR: 
	I0304 04:28:28.634258   18794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2 +20000M
	I0304 04:28:28.645279   18794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:28.645298   18794 main.go:141] libmachine: STDERR: 
	I0304 04:28:28.645313   18794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:28.645319   18794 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:28.645357   18794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:ab:12:aa:ff:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:28.647164   18794 main.go:141] libmachine: STDOUT: 
	I0304 04:28:28.647186   18794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:28.647199   18794 client.go:171] LocalClient.Create took 290.162792ms
	I0304 04:28:30.373674   18794 cache.go:157] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0304 04:28:30.373766   18794 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 9.417911708s
	I0304 04:28:30.373830   18794 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0304 04:28:30.373887   18794 cache.go:87] Successfully saved all images to host disk.
	I0304 04:28:30.649433   18794 start.go:128] duration metric: createHost completed in 2.350135375s
	I0304 04:28:30.649521   18794 start.go:83] releasing machines lock for "no-preload-155000", held for 2.3505505s
	W0304 04:28:30.649818   18794 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:30.664404   18794 out.go:177] 
	W0304 04:28:30.668456   18794 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:30.668504   18794 out.go:239] * 
	* 
	W0304 04:28:30.670414   18794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:30.682266   18794 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-155000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (65.509083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
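Editor's note: every start failure in this group reduces to the same symptom visible in the log above, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not serving its UNIX socket when the qemu2 driver tried to attach. A minimal pre-flight sketch of that check is below; the socket path is taken from the log, while the helper function name is ours and purely illustrative:

```shell
# Hypothetical pre-flight check for the qemu2 driver's networking dependency.
# Prints "listening" if the path exists and is a UNIX socket, "not running"
# otherwise.
check_vmnet_socket() {
  sock="$1"
  if [ -S "$sock" ]; then
    echo "listening"
  else
    echo "not running"
  fi
}

check_vmnet_socket /var/run/socket_vmnet
```

Note that a socket file can exist with no listener behind it, so a stricter check would attempt an actual connection (for example with `nc -U`); the `-S` test only confirms the path is a socket.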

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-394000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-394000 create -f testdata/busybox.yaml: exit status 1 (30.045459ms)

** stderr ** 
	error: context "old-k8s-version-394000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-394000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.602625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.842417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-394000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-394000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-394000 describe deploy/metrics-server -n kube-system: exit status 1 (27.074209ms)

** stderr ** 
	error: context "old-k8s-version-394000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-394000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.791125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-394000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-394000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.204016417s)

-- stdout --
	* [old-k8s-version-394000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-394000 in cluster old-k8s-version-394000
	* Restarting existing qemu2 VM for "old-k8s-version-394000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-394000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:26.081233   18858 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:26.081349   18858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:26.081353   18858 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:26.081356   18858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:26.081490   18858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:26.082592   18858 out.go:298] Setting JSON to false
	I0304 04:28:26.098966   18858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10678,"bootTime":1709544628,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:26.099031   18858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:26.102790   18858 out.go:177] * [old-k8s-version-394000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:26.113730   18858 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:26.109891   18858 notify.go:220] Checking for updates...
	I0304 04:28:26.119398   18858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:26.125786   18858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:26.132812   18858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:26.138236   18858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:26.144829   18858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:26.149049   18858 config.go:182] Loaded profile config "old-k8s-version-394000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0304 04:28:26.153799   18858 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0304 04:28:26.157845   18858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:26.161831   18858 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:28:26.168867   18858 start.go:299] selected driver: qemu2
	I0304 04:28:26.168876   18858 start.go:903] validating driver "qemu2" against &{Name:old-k8s-version-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-394000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:26.168947   18858 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:26.171500   18858 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:26.171551   18858 cni.go:84] Creating CNI manager for ""
	I0304 04:28:26.171561   18858 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0304 04:28:26.171572   18858 start_flags.go:323] config:
	{Name:old-k8s-version-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-394000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:26.176486   18858 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:26.184815   18858 out.go:177] * Starting control plane node old-k8s-version-394000 in cluster old-k8s-version-394000
	I0304 04:28:26.190799   18858 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:28:26.190814   18858 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:28:26.190822   18858 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:26.190877   18858 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:26.190883   18858 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0304 04:28:26.190964   18858 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/old-k8s-version-394000/config.json ...
	I0304 04:28:26.191340   18858 start.go:365] acquiring machines lock for old-k8s-version-394000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:26.191375   18858 start.go:369] acquired machines lock for "old-k8s-version-394000" in 28.583µs
	I0304 04:28:26.191385   18858 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:26.191389   18858 fix.go:54] fixHost starting: 
	I0304 04:28:26.191514   18858 fix.go:102] recreateIfNeeded on old-k8s-version-394000: state=Stopped err=<nil>
	W0304 04:28:26.191523   18858 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:26.194770   18858 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-394000" ...
	I0304 04:28:26.202872   18858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:df:ba:df:48:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:26.205087   18858 main.go:141] libmachine: STDOUT: 
	I0304 04:28:26.205110   18858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:26.205140   18858 fix.go:56] fixHost completed within 13.749334ms
	I0304 04:28:26.205145   18858 start.go:83] releasing machines lock for "old-k8s-version-394000", held for 13.764541ms
	W0304 04:28:26.205153   18858 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:26.205195   18858 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:26.205201   18858 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:31.207232   18858 start.go:365] acquiring machines lock for old-k8s-version-394000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:31.207315   18858 start.go:369] acquired machines lock for "old-k8s-version-394000" in 59.334µs
	I0304 04:28:31.207338   18858 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:31.207343   18858 fix.go:54] fixHost starting: 
	I0304 04:28:31.207473   18858 fix.go:102] recreateIfNeeded on old-k8s-version-394000: state=Stopped err=<nil>
	W0304 04:28:31.207478   18858 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:31.212822   18858 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-394000" ...
	I0304 04:28:31.219856   18858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:df:ba:df:48:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/old-k8s-version-394000/disk.qcow2
	I0304 04:28:31.221961   18858 main.go:141] libmachine: STDOUT: 
	I0304 04:28:31.221977   18858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:31.221997   18858 fix.go:56] fixHost completed within 14.654583ms
	I0304 04:28:31.222000   18858 start.go:83] releasing machines lock for "old-k8s-version-394000", held for 14.680292ms
	W0304 04:28:31.222041   18858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-394000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-394000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:31.229738   18858 out.go:177] 
	W0304 04:28:31.233859   18858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:31.233874   18858 out.go:239] * 
	* 
	W0304 04:28:31.234394   18858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:31.240819   18858 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-394000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (38.559209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
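Every start failure in this group traces to the same host-side condition: the qemu2 driver could not reach the socket_vmnet daemon (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). A minimal diagnostic sketch, assuming socket_vmnet was installed via Homebrew (the socket path is taken from the log above; the brew service name is an assumption, not part of this report):

```shell
# Probe the socket_vmnet unix socket that the qemu2 driver expects.
# Path taken from the log above; the brew service name is an assumption.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  echo "socket_vmnet socket missing; try: sudo brew services start socket_vmnet"
fi
```

A missing socket on the CI host would be consistent with the uniform `Connection refused` error seen across the failed tests in this run.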

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-155000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-155000 create -f testdata/busybox.yaml: exit status 1 (30.075083ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-155000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-155000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (30.717666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (30.7765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-155000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-155000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-155000 describe deploy/metrics-server -n kube-system: exit status 1 (27.140334ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-155000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-155000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (31.291541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-155000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-155000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.223076958s)

                                                
                                                
-- stdout --
	* [no-preload-155000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-155000 in cluster no-preload-155000
	* Restarting existing qemu2 VM for "no-preload-155000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-155000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:28:31.169314   18887 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:31.169445   18887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:31.169448   18887 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:31.169450   18887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:31.169588   18887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:31.170656   18887 out.go:298] Setting JSON to false
	I0304 04:28:31.186953   18887 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10683,"bootTime":1709544628,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:31.187016   18887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:31.190913   18887 out.go:177] * [no-preload-155000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:31.197926   18887 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:31.202862   18887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:31.197962   18887 notify.go:220] Checking for updates...
	I0304 04:28:31.205880   18887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:31.212816   18887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:31.219840   18887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:31.229743   18887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:31.234190   18887 config.go:182] Loaded profile config "no-preload-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0304 04:28:31.234489   18887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:31.247729   18887 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:28:31.259765   18887 start.go:299] selected driver: qemu2
	I0304 04:28:31.259779   18887 start.go:903] validating driver "qemu2" against &{Name:no-preload-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-155000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNode
Requested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:31.259840   18887 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:31.262660   18887 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:31.262721   18887 cni.go:84] Creating CNI manager for ""
	I0304 04:28:31.262729   18887 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:31.262734   18887 start_flags.go:323] config:
	{Name:no-preload-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-155000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:31.267719   18887 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.275692   18887 out.go:177] * Starting control plane node no-preload-155000 in cluster no-preload-155000
	I0304 04:28:31.279797   18887 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:28:31.279908   18887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/no-preload-155000/config.json ...
	I0304 04:28:31.279922   18887 cache.go:107] acquiring lock: {Name:mk7f58029d9b549ed1b53d9ce985d3e0b0f5f3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.279962   18887 cache.go:107] acquiring lock: {Name:mk81aa7944e03a923dcba1b84febfdc8d1dc6c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280019   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0304 04:28:31.280031   18887 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.042µs
	I0304 04:28:31.280036   18887 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0304 04:28:31.280041   18887 cache.go:107] acquiring lock: {Name:mk418e03d18efef5d707693a9e6136f9f343acb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280029   18887 cache.go:107] acquiring lock: {Name:mk1f34424242566283322731af54f08f1fb3f2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280063   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0304 04:28:31.280078   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0304 04:28:31.280051   18887 cache.go:107] acquiring lock: {Name:mkeece92f977da3fde325ea7e5181c94d8670a8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280087   18887 cache.go:107] acquiring lock: {Name:mk3d9850ac712d86460bf46d9b50082fe887f4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280083   18887 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 42.125µs
	I0304 04:28:31.280135   18887 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0304 04:28:31.280070   18887 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 147.75µs
	I0304 04:28:31.280139   18887 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0304 04:28:31.280099   18887 cache.go:107] acquiring lock: {Name:mk724d120bfe5b20a8a707b65b891692983dcf63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280157   18887 cache.go:107] acquiring lock: {Name:mk32a691ff3342fd246bc3070b890968b050faa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:31.280159   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0304 04:28:31.280184   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0304 04:28:31.280177   18887 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 88.625µs
	I0304 04:28:31.280201   18887 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0304 04:28:31.280189   18887 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 147.167µs
	I0304 04:28:31.280236   18887 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0304 04:28:31.280206   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0304 04:28:31.280245   18887 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 244µs
	I0304 04:28:31.280249   18887 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0304 04:28:31.280209   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0304 04:28:31.280217   18887 cache.go:115] /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0304 04:28:31.280279   18887 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 289.875µs
	I0304 04:28:31.280283   18887 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0304 04:28:31.280285   18887 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 281.458µs
	I0304 04:28:31.280290   18887 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0304 04:28:31.280293   18887 cache.go:87] Successfully saved all images to host disk.
	I0304 04:28:31.280407   18887 start.go:365] acquiring machines lock for no-preload-155000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:31.280438   18887 start.go:369] acquired machines lock for "no-preload-155000" in 24.917µs
	I0304 04:28:31.280448   18887 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:31.280452   18887 fix.go:54] fixHost starting: 
	I0304 04:28:31.280569   18887 fix.go:102] recreateIfNeeded on no-preload-155000: state=Stopped err=<nil>
	W0304 04:28:31.280577   18887 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:31.284767   18887 out.go:177] * Restarting existing qemu2 VM for "no-preload-155000" ...
	I0304 04:28:31.287856   18887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:ab:12:aa:ff:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:31.289727   18887 main.go:141] libmachine: STDOUT: 
	I0304 04:28:31.289750   18887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:31.289779   18887 fix.go:56] fixHost completed within 9.32575ms
	I0304 04:28:31.289787   18887 start.go:83] releasing machines lock for "no-preload-155000", held for 9.340792ms
	W0304 04:28:31.289796   18887 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:31.289832   18887 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:31.289837   18887 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:36.292019   18887 start.go:365] acquiring machines lock for no-preload-155000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:36.292479   18887 start.go:369] acquired machines lock for "no-preload-155000" in 353.458µs
	I0304 04:28:36.292609   18887 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:36.292634   18887 fix.go:54] fixHost starting: 
	I0304 04:28:36.293341   18887 fix.go:102] recreateIfNeeded on no-preload-155000: state=Stopped err=<nil>
	W0304 04:28:36.293368   18887 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:36.298966   18887 out.go:177] * Restarting existing qemu2 VM for "no-preload-155000" ...
	I0304 04:28:36.316169   18887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:ab:12:aa:ff:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/no-preload-155000/disk.qcow2
	I0304 04:28:36.326221   18887 main.go:141] libmachine: STDOUT: 
	I0304 04:28:36.326287   18887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:36.326372   18887 fix.go:56] fixHost completed within 33.7435ms
	I0304 04:28:36.326388   18887 start.go:83] releasing machines lock for "no-preload-155000", held for 33.887167ms
	W0304 04:28:36.326570   18887 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-155000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-155000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:36.333009   18887 out.go:177] 
	W0304 04:28:36.336883   18887 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:36.336914   18887 out.go:239] * 
	* 
	W0304 04:28:36.339684   18887 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:36.347909   18887 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-155000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (69.899833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.30s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-394000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.527917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-394000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-394000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-394000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.969084ms)

** stderr ** 
	error: context "old-k8s-version-394000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-394000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.910959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-394000 image list --format=json
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.689709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-394000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-394000 --alsologtostderr -v=1: exit status 89 (43.804792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-394000"

                                                
** stderr ** 
	I0304 04:28:31.485407   18906 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:31.485759   18906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:31.485764   18906 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:31.485766   18906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:31.485901   18906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:31.486089   18906 out.go:298] Setting JSON to false
	I0304 04:28:31.486097   18906 mustload.go:65] Loading cluster: old-k8s-version-394000
	I0304 04:28:31.486272   18906 config.go:182] Loaded profile config "old-k8s-version-394000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0304 04:28:31.491597   18906 out.go:177] * The control plane node must be running for this command
	I0304 04:28:31.495653   18906 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-394000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-394000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (31.1525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (30.201709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-159000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-159000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (10.19751725s)

-- stdout --
	* [embed-certs-159000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-159000 in cluster embed-certs-159000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:31.960916   18931 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:31.961029   18931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:31.961032   18931 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:31.961034   18931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:31.961173   18931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:31.962245   18931 out.go:298] Setting JSON to false
	I0304 04:28:31.978372   18931 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10683,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:31.978453   18931 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:31.982343   18931 out.go:177] * [embed-certs-159000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:31.987222   18931 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:31.991143   18931 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:31.987310   18931 notify.go:220] Checking for updates...
	I0304 04:28:31.997181   18931 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:32.000159   18931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:32.003163   18931 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:32.006191   18931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:32.009523   18931 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:32.009594   18931 config.go:182] Loaded profile config "no-preload-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0304 04:28:32.009641   18931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:32.013026   18931 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:28:32.020103   18931 start.go:299] selected driver: qemu2
	I0304 04:28:32.020109   18931 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:28:32.020114   18931 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:32.022453   18931 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:28:32.024091   18931 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:28:32.027242   18931 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:32.027280   18931 cni.go:84] Creating CNI manager for ""
	I0304 04:28:32.027287   18931 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:32.027294   18931 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:28:32.027306   18931 start_flags.go:323] config:
	{Name:embed-certs-159000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:32.031792   18931 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:32.040103   18931 out.go:177] * Starting control plane node embed-certs-159000 in cluster embed-certs-159000
	I0304 04:28:32.044106   18931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:28:32.044123   18931 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:28:32.044134   18931 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:32.044195   18931 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:32.044201   18931 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:28:32.044283   18931 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/embed-certs-159000/config.json ...
	I0304 04:28:32.044298   18931 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/embed-certs-159000/config.json: {Name:mk93a530e0230e70d416ca48dfbaac05742a7673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:28:32.044511   18931 start.go:365] acquiring machines lock for embed-certs-159000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:32.044546   18931 start.go:369] acquired machines lock for "embed-certs-159000" in 27.458µs
	I0304 04:28:32.044558   18931 start.go:93] Provisioning new machine with config: &{Name:embed-certs-159000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:32.044595   18931 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:32.052167   18931 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:32.070014   18931 start.go:159] libmachine.API.Create for "embed-certs-159000" (driver="qemu2")
	I0304 04:28:32.070048   18931 client.go:168] LocalClient.Create starting
	I0304 04:28:32.070136   18931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:32.070167   18931 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:32.070178   18931 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:32.070224   18931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:32.070249   18931 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:32.070263   18931 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:32.070628   18931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:32.213989   18931 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:32.297205   18931 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:32.297210   18931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:32.297398   18931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:32.309948   18931 main.go:141] libmachine: STDOUT: 
	I0304 04:28:32.309965   18931 main.go:141] libmachine: STDERR: 
	I0304 04:28:32.310020   18931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2 +20000M
	I0304 04:28:32.320811   18931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:32.320829   18931 main.go:141] libmachine: STDERR: 
	I0304 04:28:32.320861   18931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:32.320865   18931 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:32.320898   18931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c1:46:5a:7d:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:32.322719   18931 main.go:141] libmachine: STDOUT: 
	I0304 04:28:32.322736   18931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:32.322753   18931 client.go:171] LocalClient.Create took 252.701541ms
	I0304 04:28:34.324991   18931 start.go:128] duration metric: createHost completed in 2.280378584s
	I0304 04:28:34.325075   18931 start.go:83] releasing machines lock for "embed-certs-159000", held for 2.280532208s
	W0304 04:28:34.325151   18931 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:34.336123   18931 out.go:177] * Deleting "embed-certs-159000" in qemu2 ...
	W0304 04:28:34.367820   18931 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:34.367859   18931 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:39.370025   18931 start.go:365] acquiring machines lock for embed-certs-159000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:39.696399   18931 start.go:369] acquired machines lock for "embed-certs-159000" in 326.268417ms
	I0304 04:28:39.696543   18931 start.go:93] Provisioning new machine with config: &{Name:embed-certs-159000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:39.696852   18931 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:39.701483   18931 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:39.751415   18931 start.go:159] libmachine.API.Create for "embed-certs-159000" (driver="qemu2")
	I0304 04:28:39.751463   18931 client.go:168] LocalClient.Create starting
	I0304 04:28:39.751618   18931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:39.751694   18931 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:39.751710   18931 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:39.751767   18931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:39.751809   18931 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:39.751822   18931 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:39.752313   18931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:39.910251   18931 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:40.041068   18931 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:40.041078   18931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:40.041274   18931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:40.054100   18931 main.go:141] libmachine: STDOUT: 
	I0304 04:28:40.054120   18931 main.go:141] libmachine: STDERR: 
	I0304 04:28:40.054172   18931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2 +20000M
	I0304 04:28:40.064767   18931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:40.064783   18931 main.go:141] libmachine: STDERR: 
	I0304 04:28:40.064792   18931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:40.064796   18931 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:40.064826   18931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:26:5f:67:92:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:40.066662   18931 main.go:141] libmachine: STDOUT: 
	I0304 04:28:40.066679   18931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:40.066690   18931 client.go:171] LocalClient.Create took 315.223083ms
	I0304 04:28:42.068821   18931 start.go:128] duration metric: createHost completed in 2.371953292s
	I0304 04:28:42.068886   18931 start.go:83] releasing machines lock for "embed-certs-159000", held for 2.372460292s
	W0304 04:28:42.069197   18931 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:42.093908   18931 out.go:177] 
	W0304 04:28:42.102924   18931 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:42.102950   18931 out.go:239] * 
	* 
	W0304 04:28:42.105843   18931 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:42.117851   18931 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-159000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (72.412708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.27s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-155000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (34.070291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-155000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.958166ms)

** stderr ** 
	error: context "no-preload-155000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-155000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (31.240167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-155000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (30.335375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-155000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-155000 --alsologtostderr -v=1: exit status 89 (43.123958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-155000"

-- /stdout --
** stderr ** 
	I0304 04:28:36.629160   18953 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:36.629292   18953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:36.629295   18953 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:36.629298   18953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:36.629423   18953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:36.629634   18953 out.go:298] Setting JSON to false
	I0304 04:28:36.629643   18953 mustload.go:65] Loading cluster: no-preload-155000
	I0304 04:28:36.629833   18953 config.go:182] Loaded profile config "no-preload-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0304 04:28:36.634144   18953 out.go:177] * The control plane node must be running for this command
	I0304 04:28:36.638364   18953 out.go:177]   To start a cluster, run: "minikube start -p no-preload-155000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-155000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (30.950583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (30.877708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-155000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-254000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-254000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.795589042s)

-- stdout --
	* [default-k8s-diff-port-254000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-254000 in cluster default-k8s-diff-port-254000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:37.338914   18988 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:37.339195   18988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:37.339202   18988 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:37.339205   18988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:37.339460   18988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:37.340841   18988 out.go:298] Setting JSON to false
	I0304 04:28:37.357323   18988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10689,"bootTime":1709544628,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:37.357380   18988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:37.361404   18988 out.go:177] * [default-k8s-diff-port-254000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:37.370237   18988 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:37.370317   18988 notify.go:220] Checking for updates...
	I0304 04:28:37.376179   18988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:37.379298   18988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:37.380819   18988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:37.384214   18988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:37.387206   18988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:37.390543   18988 config.go:182] Loaded profile config "embed-certs-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:37.390616   18988 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:37.390668   18988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:37.395137   18988 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:28:37.402217   18988 start.go:299] selected driver: qemu2
	I0304 04:28:37.402221   18988 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:28:37.402227   18988 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:37.404539   18988 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:28:37.408264   18988 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:28:37.411343   18988 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:37.411384   18988 cni.go:84] Creating CNI manager for ""
	I0304 04:28:37.411392   18988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:37.411397   18988 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:28:37.411403   18988 start_flags.go:323] config:
	{Name:default-k8s-diff-port-254000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-254000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:37.415889   18988 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:37.423226   18988 out.go:177] * Starting control plane node default-k8s-diff-port-254000 in cluster default-k8s-diff-port-254000
	I0304 04:28:37.427148   18988 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:28:37.427163   18988 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:28:37.427170   18988 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:37.427224   18988 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:37.427230   18988 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:28:37.427297   18988 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/default-k8s-diff-port-254000/config.json ...
	I0304 04:28:37.427309   18988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/default-k8s-diff-port-254000/config.json: {Name:mk11d45ccdf7ab29e7ba736a1482759d9a3c2dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:28:37.427538   18988 start.go:365] acquiring machines lock for default-k8s-diff-port-254000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:37.427575   18988 start.go:369] acquired machines lock for "default-k8s-diff-port-254000" in 28µs
	I0304 04:28:37.427587   18988 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-254000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:37.427625   18988 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:37.432152   18988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:37.450662   18988 start.go:159] libmachine.API.Create for "default-k8s-diff-port-254000" (driver="qemu2")
	I0304 04:28:37.450696   18988 client.go:168] LocalClient.Create starting
	I0304 04:28:37.450757   18988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:37.450798   18988 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:37.450809   18988 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:37.450851   18988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:37.450877   18988 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:37.450884   18988 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:37.451247   18988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:37.594850   18988 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:37.667899   18988 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:37.667904   18988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:37.668071   18988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:37.680581   18988 main.go:141] libmachine: STDOUT: 
	I0304 04:28:37.680605   18988 main.go:141] libmachine: STDERR: 
	I0304 04:28:37.680667   18988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2 +20000M
	I0304 04:28:37.692043   18988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:37.692065   18988 main.go:141] libmachine: STDERR: 
	I0304 04:28:37.692079   18988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:37.692083   18988 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:37.692112   18988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:10:40:0d:f8:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:37.694048   18988 main.go:141] libmachine: STDOUT: 
	I0304 04:28:37.694066   18988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:37.694091   18988 client.go:171] LocalClient.Create took 243.390375ms
	I0304 04:28:39.696236   18988 start.go:128] duration metric: createHost completed in 2.268569291s
	I0304 04:28:39.696288   18988 start.go:83] releasing machines lock for "default-k8s-diff-port-254000", held for 2.26871625s
	W0304 04:28:39.696349   18988 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:39.713383   18988 out.go:177] * Deleting "default-k8s-diff-port-254000" in qemu2 ...
	W0304 04:28:39.735589   18988 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:39.735610   18988 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:44.737835   18988 start.go:365] acquiring machines lock for default-k8s-diff-port-254000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:44.738282   18988 start.go:369] acquired machines lock for "default-k8s-diff-port-254000" in 323.792µs
	I0304 04:28:44.738430   18988 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-254000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:44.738739   18988 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:44.748356   18988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:44.797352   18988 start.go:159] libmachine.API.Create for "default-k8s-diff-port-254000" (driver="qemu2")
	I0304 04:28:44.797408   18988 client.go:168] LocalClient.Create starting
	I0304 04:28:44.797562   18988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:44.797622   18988 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:44.797646   18988 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:44.797719   18988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:44.797754   18988 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:44.797767   18988 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:44.798468   18988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:44.953478   18988 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:45.039772   18988 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:45.039778   18988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:45.039938   18988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:45.052416   18988 main.go:141] libmachine: STDOUT: 
	I0304 04:28:45.052446   18988 main.go:141] libmachine: STDERR: 
	I0304 04:28:45.052495   18988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2 +20000M
	I0304 04:28:45.063050   18988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:45.063073   18988 main.go:141] libmachine: STDERR: 
	I0304 04:28:45.063088   18988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:45.063095   18988 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:45.063123   18988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:6c:a9:7f:d6:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:45.064833   18988 main.go:141] libmachine: STDOUT: 
	I0304 04:28:45.064852   18988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:45.064863   18988 client.go:171] LocalClient.Create took 267.451458ms
	I0304 04:28:47.066994   18988 start.go:128] duration metric: createHost completed in 2.328229958s
	I0304 04:28:47.067059   18988 start.go:83] releasing machines lock for "default-k8s-diff-port-254000", held for 2.328769625s
	W0304 04:28:47.067338   18988 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:47.077704   18988 out.go:177] 
	W0304 04:28:47.081787   18988 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:47.081847   18988 out.go:239] * 
	* 
	W0304 04:28:47.083252   18988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:47.091658   18988 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-254000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (45.949209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.84s)

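Every failure in this group traces back to the same root cause visible in the stderr above: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal pre-flight sketch for the CI host (the function name and messages are hypothetical, not part of minikube; "Connection refused" usually means the socket file exists but no daemon is accepting on it, while a missing file means the daemon was never started):

```shell
#!/bin/sh
# Report whether the socket_vmnet unix socket is present before a test run.
check_socket_vmnet() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # Socket file exists; a daemon may still need restarting if connects fail.
    echo "socket present: $sock"
  else
    echo "socket missing: $sock (start socket_vmnet before running minikube)"
  fi
}

check_socket_vmnet "$@"
```

On a macOS CI host this usually means the socket_vmnet daemon (typically run as a root launchd service) is down; restarting it before the run is the usual fix.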
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-159000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-159000 create -f testdata/busybox.yaml: exit status 1 (30.531792ms)

** stderr ** 
	error: context "embed-certs-159000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-159000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.066583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.599458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

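The DeployApp failure is a cascade, not an independent bug: because FirstStart never brought the VM up, the "embed-certs-159000" context was never written to the kubeconfig, so every later `kubectl --context embed-certs-159000` call fails the same way. A hedged sketch of that precondition check (`context_exists` is a hypothetical helper; it assumes `kubectl` is on PATH and simply reports the context as missing when it is not):

```shell
#!/bin/sh
# Check whether a kubeconfig context exists before targeting it.
context_exists() {
  if kubectl config get-contexts -o name 2>/dev/null | grep -qx "$1"; then
    echo "context $1 exists"
  else
    # Also reached when kubectl is absent or the kubeconfig is empty.
    echo "context $1 missing"
  fi
}

context_exists embed-certs-159000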
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-159000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-159000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-159000 describe deploy/metrics-server -n kube-system: exit status 1 (27.056458ms)

** stderr ** 
	error: context "embed-certs-159000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-159000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.077958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-159000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-159000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.169146917s)

-- stdout --
	* [embed-certs-159000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-159000 in cluster embed-certs-159000
	* Restarting existing qemu2 VM for "embed-certs-159000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-159000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:42.599004   19022 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:42.599402   19022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:42.599407   19022 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:42.599409   19022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:42.599586   19022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:42.601045   19022 out.go:298] Setting JSON to false
	I0304 04:28:42.617532   19022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10694,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:42.617602   19022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:42.622549   19022 out.go:177] * [embed-certs-159000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:42.629365   19022 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:42.629407   19022 notify.go:220] Checking for updates...
	I0304 04:28:42.636495   19022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:42.637950   19022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:42.641515   19022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:42.644493   19022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:42.647559   19022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:42.650871   19022 config.go:182] Loaded profile config "embed-certs-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:42.651139   19022 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:42.655487   19022 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:28:42.662440   19022 start.go:299] selected driver: qemu2
	I0304 04:28:42.662447   19022 start.go:903] validating driver "qemu2" against &{Name:embed-certs-159000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:42.662525   19022 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:42.664884   19022 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:42.664929   19022 cni.go:84] Creating CNI manager for ""
	I0304 04:28:42.664936   19022 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:42.664944   19022 start_flags.go:323] config:
	{Name:embed-certs-159000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:42.669366   19022 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:42.675571   19022 out.go:177] * Starting control plane node embed-certs-159000 in cluster embed-certs-159000
	I0304 04:28:42.679490   19022 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:28:42.679502   19022 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:28:42.679509   19022 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:42.679557   19022 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:42.679563   19022 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:28:42.679619   19022 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/embed-certs-159000/config.json ...
	I0304 04:28:42.680054   19022 start.go:365] acquiring machines lock for embed-certs-159000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:42.680086   19022 start.go:369] acquired machines lock for "embed-certs-159000" in 26µs
	I0304 04:28:42.680094   19022 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:42.680098   19022 fix.go:54] fixHost starting: 
	I0304 04:28:42.680212   19022 fix.go:102] recreateIfNeeded on embed-certs-159000: state=Stopped err=<nil>
	W0304 04:28:42.680221   19022 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:42.681965   19022 out.go:177] * Restarting existing qemu2 VM for "embed-certs-159000" ...
	I0304 04:28:42.690516   19022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:26:5f:67:92:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:42.692581   19022 main.go:141] libmachine: STDOUT: 
	I0304 04:28:42.692605   19022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:42.692638   19022 fix.go:56] fixHost completed within 12.538584ms
	I0304 04:28:42.692642   19022 start.go:83] releasing machines lock for "embed-certs-159000", held for 12.551958ms
	W0304 04:28:42.692651   19022 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:42.692686   19022 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:42.692691   19022 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:47.694748   19022 start.go:365] acquiring machines lock for embed-certs-159000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:47.694818   19022 start.go:369] acquired machines lock for "embed-certs-159000" in 51.542µs
	I0304 04:28:47.694833   19022 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:47.694838   19022 fix.go:54] fixHost starting: 
	I0304 04:28:47.695039   19022 fix.go:102] recreateIfNeeded on embed-certs-159000: state=Stopped err=<nil>
	W0304 04:28:47.695047   19022 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:47.698846   19022 out.go:177] * Restarting existing qemu2 VM for "embed-certs-159000" ...
	I0304 04:28:47.706878   19022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:26:5f:67:92:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/embed-certs-159000/disk.qcow2
	I0304 04:28:47.709861   19022 main.go:141] libmachine: STDOUT: 
	I0304 04:28:47.709893   19022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:47.709912   19022 fix.go:56] fixHost completed within 15.074125ms
	I0304 04:28:47.709917   19022 start.go:83] releasing machines lock for "embed-certs-159000", held for 15.092ms
	W0304 04:28:47.709989   19022 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-159000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-159000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:47.715817   19022 out.go:177] 
	W0304 04:28:47.719906   19022 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:47.719921   19022 out.go:239] * 
	* 
	W0304 04:28:47.720752   19022 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:47.729878   19022 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-159000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (45.243667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-254000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-254000 create -f testdata/busybox.yaml: exit status 1 (27.92325ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-254000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-254000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (32.098292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (31.615541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-254000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-254000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-254000 describe deploy/metrics-server -n kube-system: exit status 1 (28.072791ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-254000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-254000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (30.859417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-254000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-254000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.199992708s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-254000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-254000 in cluster default-k8s-diff-port-254000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-254000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-254000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0304 04:28:47.555954   19053 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:47.556095   19053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:47.556099   19053 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:47.556101   19053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:47.556246   19053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:47.557248   19053 out.go:298] Setting JSON to false
	I0304 04:28:47.573699   19053 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10699,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:47.573757   19053 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:47.578857   19053 out.go:177] * [default-k8s-diff-port-254000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:47.585966   19053 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:47.586015   19053 notify.go:220] Checking for updates...
	I0304 04:28:47.589922   19053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:47.592902   19053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:47.595802   19053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:47.598893   19053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:47.604869   19053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:47.608192   19053 config.go:182] Loaded profile config "default-k8s-diff-port-254000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:47.608481   19053 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:47.612881   19053 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:28:47.619812   19053 start.go:299] selected driver: qemu2
	I0304 04:28:47.619825   19053 start.go:903] validating driver "qemu2" against &{Name:default-k8s-diff-port-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-254000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:47.619896   19053 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:47.622288   19053 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0304 04:28:47.622332   19053 cni.go:84] Creating CNI manager for ""
	I0304 04:28:47.622340   19053 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:47.622348   19053 start_flags.go:323] config:
	{Name:default-k8s-diff-port-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-2540
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:47.626750   19053 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:47.634811   19053 out.go:177] * Starting control plane node default-k8s-diff-port-254000 in cluster default-k8s-diff-port-254000
	I0304 04:28:47.638905   19053 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:28:47.638921   19053 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:28:47.638931   19053 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:47.639000   19053 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:47.639006   19053 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:28:47.639082   19053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/default-k8s-diff-port-254000/config.json ...
	I0304 04:28:47.639593   19053 start.go:365] acquiring machines lock for default-k8s-diff-port-254000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:47.639625   19053 start.go:369] acquired machines lock for "default-k8s-diff-port-254000" in 26.083µs
	I0304 04:28:47.639636   19053 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:47.639641   19053 fix.go:54] fixHost starting: 
	I0304 04:28:47.639760   19053 fix.go:102] recreateIfNeeded on default-k8s-diff-port-254000: state=Stopped err=<nil>
	W0304 04:28:47.639769   19053 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:47.643822   19053 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-254000" ...
	I0304 04:28:47.651717   19053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:6c:a9:7f:d6:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:47.653819   19053 main.go:141] libmachine: STDOUT: 
	I0304 04:28:47.653845   19053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:47.653875   19053 fix.go:56] fixHost completed within 14.233709ms
	I0304 04:28:47.653879   19053 start.go:83] releasing machines lock for "default-k8s-diff-port-254000", held for 14.248042ms
	W0304 04:28:47.653886   19053 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:47.653922   19053 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:47.653927   19053 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:52.656071   19053 start.go:365] acquiring machines lock for default-k8s-diff-port-254000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:52.656481   19053 start.go:369] acquired machines lock for "default-k8s-diff-port-254000" in 330.333µs
	I0304 04:28:52.656609   19053 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:52.656627   19053 fix.go:54] fixHost starting: 
	I0304 04:28:52.657320   19053 fix.go:102] recreateIfNeeded on default-k8s-diff-port-254000: state=Stopped err=<nil>
	W0304 04:28:52.657346   19053 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:52.675017   19053 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-254000" ...
	I0304 04:28:52.679029   19053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:6c:a9:7f:d6:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/default-k8s-diff-port-254000/disk.qcow2
	I0304 04:28:52.688709   19053 main.go:141] libmachine: STDOUT: 
	I0304 04:28:52.688797   19053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:52.688907   19053 fix.go:56] fixHost completed within 32.275542ms
	I0304 04:28:52.688927   19053 start.go:83] releasing machines lock for "default-k8s-diff-port-254000", held for 32.424083ms
	W0304 04:28:52.689202   19053 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-254000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-254000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:52.697645   19053 out.go:177] 
	W0304 04:28:52.700876   19053 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:52.700901   19053 out.go:239] * 
	* 
	W0304 04:28:52.703597   19053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:52.711854   19053 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-254000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (70.384042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-159000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.758834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-159000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.803167ms)

** stderr ** 
	error: context "embed-certs-159000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.189709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-159000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.392916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-159000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-159000 --alsologtostderr -v=1: exit status 89 (43.140917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-159000"

-- /stdout --
** stderr ** 
	I0304 04:28:47.977686   19072 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:47.977894   19072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:47.977897   19072 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:47.977899   19072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:47.978032   19072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:47.978263   19072 out.go:298] Setting JSON to false
	I0304 04:28:47.978270   19072 mustload.go:65] Loading cluster: embed-certs-159000
	I0304 04:28:47.978445   19072 config.go:182] Loaded profile config "embed-certs-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:47.982910   19072 out.go:177] * The control plane node must be running for this command
	I0304 04:28:47.986992   19072 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-159000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-159000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.392709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (30.9655ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-159000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-538000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-538000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.911369667s)

-- stdout --
	* [newest-cni-538000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-538000 in cluster newest-cni-538000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-538000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:48.454121   19095 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:48.454262   19095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:48.454266   19095 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:48.454269   19095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:48.454401   19095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:48.455495   19095 out.go:298] Setting JSON to false
	I0304 04:28:48.471498   19095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10700,"bootTime":1709544628,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:48.471573   19095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:48.476333   19095 out.go:177] * [newest-cni-538000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:48.483208   19095 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:48.483265   19095 notify.go:220] Checking for updates...
	I0304 04:28:48.487398   19095 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:48.490403   19095 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:48.493264   19095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:48.496349   19095 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:48.499413   19095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:48.501262   19095 config.go:182] Loaded profile config "default-k8s-diff-port-254000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:48.501323   19095 config.go:182] Loaded profile config "multinode-386000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:48.501380   19095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:48.505349   19095 out.go:177] * Using the qemu2 driver based on user configuration
	I0304 04:28:48.512294   19095 start.go:299] selected driver: qemu2
	I0304 04:28:48.512300   19095 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:28:48.512306   19095 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:48.514514   19095 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0304 04:28:48.514545   19095 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0304 04:28:48.522409   19095 out.go:177] * Automatically selected the socket_vmnet network
	I0304 04:28:48.523996   19095 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0304 04:28:48.524042   19095 cni.go:84] Creating CNI manager for ""
	I0304 04:28:48.524049   19095 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:48.524054   19095 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:28:48.524060   19095 start_flags.go:323] config:
	{Name:newest-cni-538000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/
bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:48.528550   19095 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:48.535385   19095 out.go:177] * Starting control plane node newest-cni-538000 in cluster newest-cni-538000
	I0304 04:28:48.539285   19095 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:28:48.539317   19095 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0304 04:28:48.539324   19095 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:48.539377   19095 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:48.539383   19095 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0304 04:28:48.539436   19095 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/newest-cni-538000/config.json ...
	I0304 04:28:48.539446   19095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/newest-cni-538000/config.json: {Name:mkda9b6c7dfd665c740b46d040f9660043382538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:28:48.539651   19095 start.go:365] acquiring machines lock for newest-cni-538000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:48.539682   19095 start.go:369] acquired machines lock for "newest-cni-538000" in 25.834µs
	I0304 04:28:48.539693   19095 start.go:93] Provisioning new machine with config: &{Name:newest-cni-538000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:48.539724   19095 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:48.544426   19095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:48.560901   19095 start.go:159] libmachine.API.Create for "newest-cni-538000" (driver="qemu2")
	I0304 04:28:48.560933   19095 client.go:168] LocalClient.Create starting
	I0304 04:28:48.561003   19095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:48.561031   19095 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:48.561039   19095 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:48.561077   19095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:48.561099   19095 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:48.561107   19095 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:48.561521   19095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:48.702992   19095 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:48.839540   19095 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:48.839548   19095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:48.839749   19095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:48.852146   19095 main.go:141] libmachine: STDOUT: 
	I0304 04:28:48.852176   19095 main.go:141] libmachine: STDERR: 
	I0304 04:28:48.852231   19095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2 +20000M
	I0304 04:28:48.862724   19095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:48.862740   19095 main.go:141] libmachine: STDERR: 
	I0304 04:28:48.862758   19095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:48.862764   19095 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:48.862790   19095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:2f:90:3f:c3:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:48.864498   19095 main.go:141] libmachine: STDOUT: 
	I0304 04:28:48.864517   19095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:48.864534   19095 client.go:171] LocalClient.Create took 303.596459ms
	I0304 04:28:50.866737   19095 start.go:128] duration metric: createHost completed in 2.327004209s
	I0304 04:28:50.866842   19095 start.go:83] releasing machines lock for "newest-cni-538000", held for 2.327145208s
	W0304 04:28:50.866925   19095 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:50.883843   19095 out.go:177] * Deleting "newest-cni-538000" in qemu2 ...
	W0304 04:28:50.910816   19095 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:50.910851   19095 start.go:709] Will try again in 5 seconds ...
	I0304 04:28:55.913015   19095 start.go:365] acquiring machines lock for newest-cni-538000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:55.913562   19095 start.go:369] acquired machines lock for "newest-cni-538000" in 429.417µs
	I0304 04:28:55.913720   19095 start.go:93] Provisioning new machine with config: &{Name:newest-cni-538000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0304 04:28:55.913960   19095 start.go:125] createHost starting for "" (driver="qemu2")
	I0304 04:28:55.919787   19095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0304 04:28:55.965786   19095 start.go:159] libmachine.API.Create for "newest-cni-538000" (driver="qemu2")
	I0304 04:28:55.965842   19095 client.go:168] LocalClient.Create starting
	I0304 04:28:55.965936   19095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/ca.pem
	I0304 04:28:55.966033   19095 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:55.966049   19095 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:55.966109   19095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18284-15061/.minikube/certs/cert.pem
	I0304 04:28:55.966151   19095 main.go:141] libmachine: Decoding PEM data...
	I0304 04:28:55.966166   19095 main.go:141] libmachine: Parsing certificate...
	I0304 04:28:55.966779   19095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0304 04:28:56.118539   19095 main.go:141] libmachine: Creating SSH key...
	I0304 04:28:56.264703   19095 main.go:141] libmachine: Creating Disk image...
	I0304 04:28:56.264711   19095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0304 04:28:56.264878   19095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2.raw /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:56.277763   19095 main.go:141] libmachine: STDOUT: 
	I0304 04:28:56.277782   19095 main.go:141] libmachine: STDERR: 
	I0304 04:28:56.277830   19095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2 +20000M
	I0304 04:28:56.288649   19095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0304 04:28:56.288662   19095 main.go:141] libmachine: STDERR: 
	I0304 04:28:56.288672   19095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:56.288676   19095 main.go:141] libmachine: Starting QEMU VM...
	I0304 04:28:56.288714   19095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:0f:5c:3b:5e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:56.290412   19095 main.go:141] libmachine: STDOUT: 
	I0304 04:28:56.290426   19095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:56.290442   19095 client.go:171] LocalClient.Create took 324.59725ms
	I0304 04:28:58.292603   19095 start.go:128] duration metric: createHost completed in 2.378626458s
	I0304 04:28:58.292750   19095 start.go:83] releasing machines lock for "newest-cni-538000", held for 2.379089166s
	W0304 04:28:58.293264   19095 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:58.302790   19095 out.go:177] 
	W0304 04:28:58.309978   19095 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:58.310007   19095 out.go:239] * 
	* 
	W0304 04:28:58.312571   19095 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:28:58.321773   19095 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-538000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000: exit status 7 (68.746833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-538000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.98s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-254000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (34.911083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-254000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-254000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-254000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.451208ms)

** stderr ** 
	error: context "default-k8s-diff-port-254000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-254000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (31.324833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-254000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (30.889375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-254000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-254000 --alsologtostderr -v=1: exit status 89 (43.517833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-254000"

-- /stdout --
** stderr ** 
	I0304 04:28:52.995863   19117 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:52.996029   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:52.996033   19117 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:52.996035   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:52.996176   19117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:52.996405   19117 out.go:298] Setting JSON to false
	I0304 04:28:52.996412   19117 mustload.go:65] Loading cluster: default-k8s-diff-port-254000
	I0304 04:28:52.996613   19117 config.go:182] Loaded profile config "default-k8s-diff-port-254000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:28:53.000818   19117 out.go:177] * The control plane node must be running for this command
	I0304 04:28:53.004789   19117 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-254000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-254000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (31.025333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (31.232584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-538000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-538000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.196420834s)

-- stdout --
	* [newest-cni-538000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-538000 in cluster newest-cni-538000
	* Restarting existing qemu2 VM for "newest-cni-538000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-538000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0304 04:28:58.664616   19157 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:28:58.664742   19157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:58.664746   19157 out.go:304] Setting ErrFile to fd 2...
	I0304 04:28:58.664749   19157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:28:58.664875   19157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:28:58.665891   19157 out.go:298] Setting JSON to false
	I0304 04:28:58.681928   19157 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10710,"bootTime":1709544628,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:28:58.682002   19157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:28:58.686041   19157 out.go:177] * [newest-cni-538000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:28:58.693160   19157 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:28:58.697043   19157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:28:58.693204   19157 notify.go:220] Checking for updates...
	I0304 04:28:58.704096   19157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:28:58.707072   19157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:28:58.710096   19157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:28:58.713098   19157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:28:58.716375   19157 config.go:182] Loaded profile config "newest-cni-538000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0304 04:28:58.716640   19157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:28:58.721035   19157 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:28:58.733003   19157 start.go:299] selected driver: qemu2
	I0304 04:28:58.733009   19157 start.go:903] validating driver "qemu2" against &{Name:newest-cni-538000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:58.733071   19157 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:28:58.735384   19157 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0304 04:28:58.735430   19157 cni.go:84] Creating CNI manager for ""
	I0304 04:28:58.735438   19157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:28:58.735446   19157 start_flags.go:323] config:
	{Name:newest-cni-538000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:28:58.739997   19157 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:28:58.748000   19157 out.go:177] * Starting control plane node newest-cni-538000 in cluster newest-cni-538000
	I0304 04:28:58.752070   19157 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:28:58.752084   19157 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0304 04:28:58.752091   19157 cache.go:56] Caching tarball of preloaded images
	I0304 04:28:58.752144   19157 preload.go:174] Found /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0304 04:28:58.752150   19157 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0304 04:28:58.752216   19157 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/newest-cni-538000/config.json ...
	I0304 04:28:58.752708   19157 start.go:365] acquiring machines lock for newest-cni-538000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:28:58.752738   19157 start.go:369] acquired machines lock for "newest-cni-538000" in 23.666µs
	I0304 04:28:58.752746   19157 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:28:58.752752   19157 fix.go:54] fixHost starting: 
	I0304 04:28:58.752874   19157 fix.go:102] recreateIfNeeded on newest-cni-538000: state=Stopped err=<nil>
	W0304 04:28:58.752888   19157 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:28:58.757047   19157 out.go:177] * Restarting existing qemu2 VM for "newest-cni-538000" ...
	I0304 04:28:58.765115   19157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:0f:5c:3b:5e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:28:58.767302   19157 main.go:141] libmachine: STDOUT: 
	I0304 04:28:58.767326   19157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:28:58.767361   19157 fix.go:56] fixHost completed within 14.609542ms
	I0304 04:28:58.767366   19157 start.go:83] releasing machines lock for "newest-cni-538000", held for 14.62375ms
	W0304 04:28:58.767374   19157 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:28:58.767406   19157 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:28:58.767412   19157 start.go:709] Will try again in 5 seconds ...
	I0304 04:29:03.767860   19157 start.go:365] acquiring machines lock for newest-cni-538000: {Name:mk8d988ba1fb58121f9398fed97785b27a82e55f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0304 04:29:03.768283   19157 start.go:369] acquired machines lock for "newest-cni-538000" in 330.625µs
	I0304 04:29:03.768409   19157 start.go:96] Skipping create...Using existing machine configuration
	I0304 04:29:03.768431   19157 fix.go:54] fixHost starting: 
	I0304 04:29:03.769119   19157 fix.go:102] recreateIfNeeded on newest-cni-538000: state=Stopped err=<nil>
	W0304 04:29:03.769147   19157 fix.go:128] unexpected machine state, will restart: <nil>
	I0304 04:29:03.779559   19157 out.go:177] * Restarting existing qemu2 VM for "newest-cni-538000" ...
	I0304 04:29:03.783521   19157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:0f:5c:3b:5e:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18284-15061/.minikube/machines/newest-cni-538000/disk.qcow2
	I0304 04:29:03.793363   19157 main.go:141] libmachine: STDOUT: 
	I0304 04:29:03.793421   19157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0304 04:29:03.793498   19157 fix.go:56] fixHost completed within 25.069917ms
	I0304 04:29:03.793514   19157 start.go:83] releasing machines lock for "newest-cni-538000", held for 25.207375ms
	W0304 04:29:03.793684   19157 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-538000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-538000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0304 04:29:03.802515   19157 out.go:177] 
	W0304 04:29:03.806585   19157 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0304 04:29:03.806661   19157 out.go:239] * 
	* 
	W0304 04:29:03.809202   19157 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:29:03.817491   19157 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-538000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000: exit status 7 (67.805042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-538000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
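Every failure in this group traces to the same root cause visible in the stderr above: nothing is listening on `/var/run/socket_vmnet`, so the qemu2 driver's network attach fails with "Connection refused". A minimal pre-flight sketch for this condition (socket path taken from the log; the helper name is ours, not minikube's):

```shell
#!/bin/sh
# Sketch: check whether the socket_vmnet daemon the qemu2 driver dials is up.
# The socket path matches the one in the log above; adjust per install.

check_socket() {
    # Prints "present" if $1 exists as a unix socket, "missing" otherwise.
    if [ -S "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

SOCKET=/var/run/socket_vmnet
if [ "$(check_socket "$SOCKET")" = "missing" ]; then
    # This is the state this test run was in: no daemon behind the socket,
    # so every "driver start" ended in "Connection refused".
    echo "socket_vmnet is not running; start it before 'minikube start --driver=qemu2'"
fi
```

Running such a check before the suite would distinguish a broken host environment from genuine minikube regressions.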

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-538000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000: exit status 7 (32.136334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-538000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-538000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-538000 --alsologtostderr -v=1: exit status 89 (43.409208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-538000"

-- /stdout --
** stderr ** 
	I0304 04:29:04.006589   19175 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:29:04.006742   19175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:29:04.006748   19175 out.go:304] Setting ErrFile to fd 2...
	I0304 04:29:04.006751   19175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:29:04.006881   19175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:29:04.007094   19175 out.go:298] Setting JSON to false
	I0304 04:29:04.007102   19175 mustload.go:65] Loading cluster: newest-cni-538000
	I0304 04:29:04.007306   19175 config.go:182] Loaded profile config "newest-cni-538000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0304 04:29:04.010713   19175 out.go:177] * The control plane node must be running for this command
	I0304 04:29:04.014811   19175 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-538000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-538000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000: exit status 7 (32.080292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-538000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000: exit status 7 (31.449916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-538000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (87/251)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
9 TestDownloadOnly/v1.16.0/DeleteAll 0.24
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 20.79
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 20.27
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.24
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.42
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.16
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 0.18
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.03
64 TestFunctional/serial/CacheCmd/cache/add_local 1.18
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.12
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.28
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.44
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
155 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.06
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 0.04
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 0.33
187 TestMainNoArgs 0.04
234 TestStoppedBinaryUpgrade/Setup 4.98
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
251 TestNoKubernetes/serial/ProfileList 31.28
252 TestNoKubernetes/serial/Stop 0.07
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
271 TestStartStop/group/old-k8s-version/serial/Stop 0.07
272 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
276 TestStartStop/group/no-preload/serial/Stop 0.06
277 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.1
293 TestStartStop/group/embed-certs/serial/Stop 0.06
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.1
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
313 TestStartStop/group/newest-cni/serial/Stop 0.07
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.1
316 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-150000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-150000: exit status 85 (99.508584ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |          |
	|         | -p download-only-150000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/04 04:04:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0304 04:04:13.502115   15488 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:04:13.502260   15488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:04:13.502264   15488 out.go:304] Setting ErrFile to fd 2...
	I0304 04:04:13.502266   15488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:04:13.502400   15488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	W0304 04:04:13.502485   15488 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18284-15061/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18284-15061/.minikube/config/config.json: no such file or directory
	I0304 04:04:13.503756   15488 out.go:298] Setting JSON to true
	I0304 04:04:13.521043   15488 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9225,"bootTime":1709544628,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:04:13.521120   15488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:04:13.526733   15488 out.go:97] [download-only-150000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:04:13.530535   15488 out.go:169] MINIKUBE_LOCATION=18284
	W0304 04:04:13.526880   15488 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball: no such file or directory
	I0304 04:04:13.526905   15488 notify.go:220] Checking for updates...
	I0304 04:04:13.536628   15488 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:04:13.538116   15488 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:04:13.541688   15488 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:04:13.544691   15488 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	W0304 04:04:13.550630   15488 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0304 04:04:13.550828   15488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:04:13.553617   15488 out.go:97] Using the qemu2 driver based on user configuration
	I0304 04:04:13.553623   15488 start.go:299] selected driver: qemu2
	I0304 04:04:13.553625   15488 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:04:13.553673   15488 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:04:13.556659   15488 out.go:169] Automatically selected the socket_vmnet network
	I0304 04:04:13.561965   15488 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0304 04:04:13.562070   15488 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:04:13.562161   15488 cni.go:84] Creating CNI manager for ""
	I0304 04:04:13.562180   15488 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0304 04:04:13.562185   15488 start_flags.go:323] config:
	{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:04:13.567124   15488 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:04:13.571744   15488 out.go:97] Downloading VM boot image ...
	I0304 04:04:13.571783   15488 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0304 04:04:31.838820   15488 out.go:97] Starting control plane node download-only-150000 in cluster download-only-150000
	I0304 04:04:31.838848   15488 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:04:32.106764   15488 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:04:32.106821   15488 cache.go:56] Caching tarball of preloaded images
	I0304 04:04:32.107949   15488 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:04:32.113280   15488 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0304 04:04:32.113313   15488 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:32.713495   15488 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0304 04:04:51.553248   15488 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:51.553426   15488 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:52.193112   15488 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0304 04:04:52.193347   15488 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-150000/config.json ...
	I0304 04:04:52.193372   15488 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-150000/config.json: {Name:mkd0fc447bd9345c4f75479b3dc3e9e060131ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:04:52.194539   15488 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0304 04:04:52.194745   15488 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0304 04:04:52.828470   15488 out.go:169] 
	W0304 04:04:52.832586   15488 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340 0x10836f340] Decompressors:map[bz2:0x140004640d0 gz:0x140004640d8 tar:0x1400000ffe0 tar.bz2:0x1400000fff0 tar.gz:0x14000464090 tar.xz:0x140004640a0 tar.zst:0x140004640c0 tbz2:0x1400000fff0 tgz:0x14000464090 txz:0x140004640a0 tzst:0x140004640c0 xz:0x140004640e0 zip:0x14000464120 zst:0x140004640e8] Getters:map[file:0x140020ac7c0 http:0x1400072cb40 https:0x1400072cb90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0304 04:04:52.832621   15488 out_reason.go:110] 
	W0304 04:04:52.840398   15488 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0304 04:04:52.844506   15488 out.go:169] 
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-150000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
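The logs above record why the v1.16.0 download-only run could not cache kubectl: `dl.k8s.io` answers 404 for the darwin/arm64 checksum file at that version. Kubernetes v1.16 predates Apple-silicon Macs, so no darwin/arm64 kubectl binary exists for it and the 404 is expected. The URL minikube requests can be reproduced with a small helper (URL layout copied from the log; the function itself is illustrative, not minikube code):

```shell
#!/bin/sh
# Sketch: rebuild the kubectl download URL seen in the log above from
# version/os/arch. Helper name is ours; only the URL scheme is from the log.
kubectl_url() {
    # $1 = version (e.g. v1.16.0), $2 = os (e.g. darwin), $3 = arch (e.g. arm64)
    echo "https://dl.k8s.io/release/$1/bin/$2/$3/kubectl"
}

kubectl_url v1.16.0 darwin arm64
# -> https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl
```

Probing that URL (or its `.sha256` sibling) for an HTTP 200 before the run would show in advance which version/arch combinations can actually be cached.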

TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-150000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (20.79s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-405000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-405000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (20.786167666s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (20.79s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-405000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-405000: exit status 85 (85.367084ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
	|         | -p download-only-150000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
	| delete  | -p download-only-150000        | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
	| start   | -o=json --download-only        | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
	|         | -p download-only-405000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/04 04:04:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0304 04:04:53.516293   15534 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:04:53.516469   15534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:04:53.516473   15534 out.go:304] Setting ErrFile to fd 2...
	I0304 04:04:53.516475   15534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:04:53.516610   15534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:04:53.517683   15534 out.go:298] Setting JSON to true
	I0304 04:04:53.533877   15534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9265,"bootTime":1709544628,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:04:53.533939   15534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:04:53.538416   15534 out.go:97] [download-only-405000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:04:53.542386   15534 out.go:169] MINIKUBE_LOCATION=18284
	I0304 04:04:53.538507   15534 notify.go:220] Checking for updates...
	I0304 04:04:53.549442   15534 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:04:53.557403   15534 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:04:53.560464   15534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:04:53.564399   15534 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	W0304 04:04:53.570414   15534 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0304 04:04:53.570625   15534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:04:53.573344   15534 out.go:97] Using the qemu2 driver based on user configuration
	I0304 04:04:53.573351   15534 start.go:299] selected driver: qemu2
	I0304 04:04:53.573355   15534 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:04:53.573399   15534 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:04:53.576385   15534 out.go:169] Automatically selected the socket_vmnet network
	I0304 04:04:53.581660   15534 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0304 04:04:53.581760   15534 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:04:53.581795   15534 cni.go:84] Creating CNI manager for ""
	I0304 04:04:53.581804   15534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:04:53.581813   15534 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:04:53.581821   15534 start_flags.go:323] config:
	{Name:download-only-405000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-405000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:04:53.586275   15534 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:04:53.589452   15534 out.go:97] Starting control plane node download-only-405000 in cluster download-only-405000
	I0304 04:04:53.589462   15534 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:04:54.243420   15534 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:04:54.243502   15534 cache.go:56] Caching tarball of preloaded images
	I0304 04:04:54.245274   15534 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:04:54.250181   15534 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0304 04:04:54.250214   15534 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:04:54.838507   15534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0304 04:05:12.193557   15534 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:05:12.193732   15534 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:05:12.775670   15534 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0304 04:05:12.775863   15534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-405000/config.json ...
	I0304 04:05:12.775879   15534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-405000/config.json: {Name:mkc22ad696c7b6719e5b546c6c66ea134851747f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:05:12.776126   15534 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0304 04:05:12.776241   15534 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-405000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-405000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.0-rc.2/json-events (20.27s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-719000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-719000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (20.27251725s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.27s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-719000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-719000: exit status 85 (75.924166ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
	|         | -p download-only-150000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
	| delete  | -p download-only-150000           | download-only-150000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST | 04 Mar 24 04:04 PST |
	| start   | -o=json --download-only           | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:04 PST |                     |
	|         | -p download-only-405000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| delete  | -p download-only-405000           | download-only-405000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST | 04 Mar 24 04:05 PST |
	| start   | -o=json --download-only           | download-only-719000 | jenkins | v1.32.0 | 04 Mar 24 04:05 PST |                     |
	|         | -p download-only-719000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/04 04:05:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0304 04:05:14.855360   15574 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:05:14.855500   15574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:05:14.855504   15574 out.go:304] Setting ErrFile to fd 2...
	I0304 04:05:14.855506   15574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:05:14.855621   15574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:05:14.856698   15574 out.go:298] Setting JSON to true
	I0304 04:05:14.872927   15574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9286,"bootTime":1709544628,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:05:14.872988   15574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:05:14.877677   15574 out.go:97] [download-only-719000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:05:14.881704   15574 out.go:169] MINIKUBE_LOCATION=18284
	I0304 04:05:14.877781   15574 notify.go:220] Checking for updates...
	I0304 04:05:14.889787   15574 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:05:14.892698   15574 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:05:14.894109   15574 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:05:14.897634   15574 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	W0304 04:05:14.903665   15574 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0304 04:05:14.903878   15574 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:05:14.906612   15574 out.go:97] Using the qemu2 driver based on user configuration
	I0304 04:05:14.906619   15574 start.go:299] selected driver: qemu2
	I0304 04:05:14.906623   15574 start.go:903] validating driver "qemu2" against <nil>
	I0304 04:05:14.906677   15574 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0304 04:05:14.909609   15574 out.go:169] Automatically selected the socket_vmnet network
	I0304 04:05:14.914822   15574 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0304 04:05:14.914920   15574 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0304 04:05:14.914956   15574 cni.go:84] Creating CNI manager for ""
	I0304 04:05:14.914965   15574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0304 04:05:14.914970   15574 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0304 04:05:14.914974   15574 start_flags.go:323] config:
	{Name:download-only-719000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-719000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:05:14.919269   15574 iso.go:125] acquiring lock: {Name:mk00f0d05fcd5690532357e58da54275ac5932b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0304 04:05:14.922601   15574 out.go:97] Starting control plane node download-only-719000 in cluster download-only-719000
	I0304 04:05:14.922608   15574 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:05:15.587915   15574 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0304 04:05:15.587995   15574 cache.go:56] Caching tarball of preloaded images
	I0304 04:05:15.589713   15574 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:05:15.594633   15574 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0304 04:05:15.594659   15574 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:05:16.177155   15574 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0304 04:05:32.710955   15574 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:05:32.711141   15574 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0304 04:05:33.266062   15574 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0304 04:05:33.266261   15574 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-719000/config.json ...
	I0304 04:05:33.266277   15574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18284-15061/.minikube/profiles/download-only-719000/config.json: {Name:mk8b3b6e8e31fa61aded81a97bc8e440bb8d0494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0304 04:05:33.266523   15574 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0304 04:05:33.266647   15574 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18284-15061/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-719000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.24s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-719000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.42s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-685000 --alsologtostderr --binary-mirror http://127.0.0.1:52418 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-685000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-685000
--- PASS: TestBinaryMirror (0.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-038000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-038000: exit status 85 (65.631084ms)

-- stdout --
	* Profile "addons-038000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-038000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-038000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-038000: exit status 85 (61.344125ms)

-- stdout --
	* Profile "addons-038000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-038000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.16s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.16s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status: exit status 7 (34.321791ms)

-- stdout --
	nospam-336000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status: exit status 7 (31.971709ms)

-- stdout --
	nospam-336000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status: exit status 7 (32.431209ms)

-- stdout --
	nospam-336000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause: exit status 89 (41.292042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-336000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause" failed: exit status 89
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause: exit status 89 (40.726708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-336000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause" failed: exit status 89
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause: exit status 89 (38.967417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-336000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 pause" failed: exit status 89
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause: exit status 89 (41.751375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-336000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause" failed: exit status 89
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause: exit status 89 (40.90025ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-336000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause" failed: exit status 89
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause: exit status 89 (40.794042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p nospam-336000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 unpause" failed: exit status 89
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (0.18s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 stop
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-336000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-336000 stop
--- PASS: TestErrorSpam/stop (0.18s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18284-15061/.minikube/files/etc/test/nested/copy/15486/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-682000 cache add registry.k8s.io/pause:3.1: (2.112215s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-682000 cache add registry.k8s.io/pause:3.3: (2.110671167s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-682000 cache add registry.k8s.io/pause:latest: (1.810342417s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.03s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3789878967/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cache add minikube-local-cache-test:functional-682000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 cache delete minikube-local-cache-test:functional-682000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-682000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 config get cpus: exit status 14 (32.084459ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 config get cpus: exit status 14 (36.12625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-682000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-682000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (157.24075ms)

-- stdout --
	* [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0304 04:07:22.882788   16196 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:07:22.882939   16196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:22.882945   16196 out.go:304] Setting ErrFile to fd 2...
	I0304 04:07:22.882948   16196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:22.883111   16196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:07:22.884286   16196 out.go:298] Setting JSON to false
	I0304 04:07:22.903851   16196 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9414,"bootTime":1709544628,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:07:22.903910   16196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:07:22.907839   16196 out.go:177] * [functional-682000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0304 04:07:22.910895   16196 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:07:22.914770   16196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:07:22.910943   16196 notify.go:220] Checking for updates...
	I0304 04:07:22.921845   16196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:07:22.924839   16196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:07:22.927904   16196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:07:22.930890   16196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:07:22.934291   16196 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:07:22.934610   16196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:07:22.938836   16196 out.go:177] * Using the qemu2 driver based on existing profile
	I0304 04:07:22.945823   16196 start.go:299] selected driver: qemu2
	I0304 04:07:22.945831   16196 start.go:903] validating driver "qemu2" against &{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:07:22.945897   16196 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:07:22.952861   16196 out.go:177] 
	W0304 04:07:22.956690   16196 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0304 04:07:22.960842   16196 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-682000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-682000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-682000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.879208ms)

-- stdout --
	* [functional-682000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0304 04:07:23.110215   16207 out.go:291] Setting OutFile to fd 1 ...
	I0304 04:07:23.110342   16207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.110345   16207 out.go:304] Setting ErrFile to fd 2...
	I0304 04:07:23.110348   16207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0304 04:07:23.110476   16207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18284-15061/.minikube/bin
	I0304 04:07:23.111890   16207 out.go:298] Setting JSON to false
	I0304 04:07:23.128584   16207 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9415,"bootTime":1709544628,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0304 04:07:23.128668   16207 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0304 04:07:23.133941   16207 out.go:177] * [functional-682000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0304 04:07:23.140726   16207 out.go:177]   - MINIKUBE_LOCATION=18284
	I0304 04:07:23.144794   16207 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	I0304 04:07:23.140784   16207 notify.go:220] Checking for updates...
	I0304 04:07:23.151816   16207 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0304 04:07:23.156337   16207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0304 04:07:23.160217   16207 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	I0304 04:07:23.164242   16207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0304 04:07:23.168693   16207 config.go:182] Loaded profile config "functional-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0304 04:07:23.168991   16207 driver.go:392] Setting default libvirt URI to qemu:///system
	I0304 04:07:23.172851   16207 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0304 04:07:23.179839   16207 start.go:299] selected driver: qemu2
	I0304 04:07:23.179852   16207 start.go:903] validating driver "qemu2" against &{Name:functional-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-682000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0304 04:07:23.179925   16207 start.go:914] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0304 04:07:23.186853   16207 out.go:177] 
	W0304 04:07:23.190833   16207 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0304 04:07:23.194839   16207 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.277167125s)
--- PASS: TestFunctional/parallel/License (1.28s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.400332584s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-682000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image rm gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-682000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 image save --daemon gcr.io/google-containers/addon-resizer:functional-682000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-682000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.615625ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "36.261333ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "72.699625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.85775ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013349375s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-682000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-682000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-682000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-682000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.06s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-277000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (0.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-883000 --output=json --user=testUser
--- PASS: TestJSONOutput/stop/Command (0.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-418000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-418000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.947125ms)

-- stdout --
	{"specversion":"1.0","id":"87317f82-3465-485b-8013-c36972320525","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-418000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fe279c8-0cfa-44f7-9595-224fa7c291e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18284"}}
	{"specversion":"1.0","id":"c46684ba-8e5b-4813-b75b-f585faaf011b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig"}}
	{"specversion":"1.0","id":"80cf52b3-8713-46d0-862a-0dc80e6b93d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"14bd8f61-d08c-4a61-9a5b-ba52c69acb3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba7a61a7-3890-4e52-87b7-9e41353dea4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube"}}
	{"specversion":"1.0","id":"34a50952-1436-4c69-b046-4463ad441420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"24fb7d35-1372-4d8b-80c1-6a5134a28bce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-418000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-418000
--- PASS: TestErrorJSONOutput (0.33s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.98s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (109.840083ms)

-- stdout --
	* [NoKubernetes-980000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18284
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18284-15061/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18284-15061/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.357208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-980000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.661304166s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.619206292s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.28s)

TestNoKubernetes/serial/Stop (0.07s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-980000
--- PASS: TestNoKubernetes/serial/Stop (0.07s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (45.80375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-980000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-289000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-394000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-394000 -n old-k8s-version-394000: exit status 7 (31.550458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-394000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-155000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-155000 -n no-preload-155000: exit status 7 (30.915583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-155000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-159000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-159000 -n embed-certs-159000: exit status 7 (31.287292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-159000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-254000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-254000 -n default-k8s-diff-port-254000: exit status 7 (33.397917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-254000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-538000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-538000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-538000 -n newest-cni-538000: exit status 7 (32.395958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-538000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.10s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/251)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3739168433/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709554004836721000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3739168433/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709554004836721000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3739168433/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709554004836721000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3739168433/001/test-1709554004836721000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (52.809834ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (85.151ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.355375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (90.044792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.997833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (92.9315ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (85.199417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo umount -f /mount-9p": exit status 89 (48.65ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo umount -f /mount-9p\"": exit status 89
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3739168433/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.65s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2883587142/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (60.828459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (90.4125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (80.185791ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (88.264958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (90.760833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (90.6085ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T /mount-9p | grep 9p": exit status 89 (86.395666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "sudo umount -f /mount-9p": exit status 89 (49.767334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-682000 ssh \"sudo umount -f /mount-9p\"": exit status 89
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2883587142/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3584812207/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3584812207/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3584812207/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (74.986083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (84.840834ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (86.466208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (88.391958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (88.465ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (87.507833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (88.552958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-682000 ssh "findmnt -T" /mount1: exit status 89 (87.7625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p functional-682000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3584812207/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3584812207/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-682000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3584812207/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.55s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-315000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-315000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-315000

>>> host: crictl pods:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: crictl containers:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> k8s: describe netcat deployment:
error: context "cilium-315000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-315000" does not exist

>>> k8s: netcat logs:
error: context "cilium-315000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-315000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-315000" does not exist

>>> k8s: coredns logs:
error: context "cilium-315000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-315000" does not exist

>>> k8s: api server logs:
error: context "cilium-315000" does not exist

>>> host: /etc/cni:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: ip a s:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: ip r s:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: iptables-save:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: iptables table nat:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-315000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-315000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-315000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-315000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-315000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-315000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-315000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-315000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-315000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-315000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-315000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: kubelet daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> k8s: kubelet logs:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-315000

>>> host: docker daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: docker daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: docker system info:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: cri-docker daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: cri-docker daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: cri-dockerd version:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: containerd daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: containerd daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: containerd config dump:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: crio daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: crio daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: /etc/crio:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

>>> host: crio config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

----------------------- debugLogs end: cilium-315000 [took: 2.423292125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-315000
--- SKIP: TestNetworkPlugins/group/cilium (2.67s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-370000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-370000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)